
JASE ›› 2019, Vol. 10 ›› Issue (4): 451-458. DOI: 10.3969/j.issn.1674-8484.2019.04.006

• Automotive Safety •

Vehicle object detection method based on data fusion of LADAR points and image

HU Yuanzhi 1, LIU Junsheng 2, HE Jia 3, XIAO Hang 2, SONG Jia 2

  1. State Key Laboratory of Vehicle NVH and Safety Technology, Chongqing 400054, China;
  2. Key Laboratory of Advanced Manufacturing Technology for Automobile Parts (Ministry of Education), Chongqing University of Technology, Chongqing 400054, China;
  3. Automotive Engineering Research Institute, China Automotive Technology & Research Center, Tianjin 300300, China
  • Received: 2019-04-17  Online: 2019-12-31  Published: 2020-01-01
  • First author: HU Yuanzhi (1977–), male (Han), from Hunan; professor. E-mail: yuanzhihu@cqut.edu.cn
  • Funding: National Key R&D Program of China (2017YFB0102500); Open Fund of the State Key Laboratory of Vehicle NVH and Safety Technology (NVHSKL-201908); Key Project of China Automotive Technology & Research Center Co., Ltd. (16190125)

Abstract: A fusion scheme combining a 4-line LADAR (laser detection and ranging) sensor with a camera is proposed to improve the accuracy of vehicle object detection for intelligent vehicles. First, a convolutional neural network detects objects in the image; next, the LADAR point cloud is spatially matched to the image data through a transformation matrix; finally, an R-Tree algorithm quickly matches the detection boxes with their corresponding LADAR points. The depth information of the matched points then yields the accurate position of each object. The proposed fusion framework was tested on images and point-cloud data collected from real road scenes. The results show that the false-negative (FN) rate of the fusion method is 8.03%, compared with 14.86% for the Mask R-CNN method alone; the fusion algorithm therefore effectively reduces the probability of missed detections in images.

Key words: intelligent vehicles, object detection, laser detection and ranging (LADAR), point cloud data, image detection, convolutional neural network, multi-sensor fusion, R-Tree algorithm
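The spatial-matching step described in the abstract (mapping LADAR points into the image plane) can be sketched as follows. This is a minimal illustration assuming a standard pinhole camera model; the intrinsic matrix `K` and the extrinsics `R`, `t` below are placeholder values for illustration, not the calibration used in the paper.

```python
import numpy as np

def project_points(points_lidar, K, R, t):
    """Project 3-D LADAR points (N x 3, sensor frame) into pixel
    coordinates using extrinsics (R, t) and intrinsics K.
    Points behind the camera (z <= 0) are discarded."""
    # Transform into the camera frame: X_cam = R @ X_lidar + t
    pts_cam = points_lidar @ R.T + t
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]
    # Perspective projection: [u, v, 1] = K @ (X/Z, Y/Z, 1)
    uv = (pts_cam / pts_cam[:, 2:3]) @ K.T
    # Return pixel coordinates plus each point's depth (range info)
    return uv[:, :2], pts_cam[:, 2], in_front

# Hypothetical calibration (placeholder values, not from the paper)
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                 # assume sensor axes already aligned
t = np.zeros(3)

pts = np.array([[1.0, 0.5, 10.0],    # 10 m ahead, slightly off-axis
                [0.0, 0.0, -5.0]])   # behind the camera -> dropped
uv, depth, mask = project_points(pts, K, R, t)
```

Once a projected point falls inside a detection box, its depth value gives the object's range, which is how the fusion scheme recovers the object's accurate position.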
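The final matching step pairs each detection box with the projected points that fall inside it; the abstract notes this lookup is accelerated with an R-Tree. The sketch below is a toy bulk-loaded R-Tree over 2-D points with rectangle queries (a simplified STR-style packing), written only to show the idea; it is not the paper's implementation, and a production system would typically use a library such as `rtree`.

```python
NODE_CAPACITY = 4  # max entries per node (small, for illustration)

class RTreeNode:
    def __init__(self, entries, is_leaf):
        self.entries = entries      # (x, y, idx) tuples, or child nodes
        self.is_leaf = is_leaf
        if is_leaf:
            xs = [e[0] for e in entries]
            ys = [e[1] for e in entries]
            self.mbr = (min(xs), min(ys), max(xs), max(ys))
        else:                       # MBR encloses all child MBRs
            self.mbr = (min(c.mbr[0] for c in entries),
                        min(c.mbr[1] for c in entries),
                        max(c.mbr[2] for c in entries),
                        max(c.mbr[3] for c in entries))

def build_rtree(points):
    """Bulk-load (x, y, idx) points by sorting on x and packing
    fixed-size runs into nodes, level by level."""
    pts = sorted(points)
    nodes = [RTreeNode(pts[i:i + NODE_CAPACITY], True)
             for i in range(0, len(pts), NODE_CAPACITY)]
    while len(nodes) > 1:
        nodes.sort(key=lambda n: n.mbr[0])
        nodes = [RTreeNode(nodes[i:i + NODE_CAPACITY], False)
                 for i in range(0, len(nodes), NODE_CAPACITY)]
    return nodes[0]

def query(node, box, out):
    """Collect indices of points inside box = (x0, y0, x1, y1),
    pruning whole subtrees whose MBR misses the box."""
    x0, y0, x1, y1 = box
    nx0, ny0, nx1, ny1 = node.mbr
    if nx1 < x0 or nx0 > x1 or ny1 < y0 or ny0 > y1:
        return
    if node.is_leaf:
        for px, py, idx in node.entries:
            if x0 <= px <= x1 and y0 <= py <= y1:
                out.append(idx)
    else:
        for child in node.entries:
            query(child, box, out)

# Projected LADAR points (pixel coords) and one detection box
points = [(float(x), float(x % 7), i) for i, x in enumerate(range(20))]
tree = build_rtree(points)
hits = []
query(tree, (5.0, 0.0, 9.0, 3.0), hits)   # box: x in [5,9], y in [0,3]
```

The MBR pruning in `query` is what makes the box-to-point matching fast compared with checking every projected point against every detection box.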