Journal of Automotive Safety and Energy ›› 2021, Vol. 12 ›› Issue (4): 440-455. DOI: 10.3969/j.issn.1674-8484.2021.04.002
• Review, Progress and Prospects •
Overview of intelligent vehicle multi-target detection technology based on multi-sensor fusion

WANG Hai1, XU Yansong1, CAI Yingfeng2,*, CHEN Long2
Received: 2021-12-15
Online: 2021-12-31
Published: 2022-01-10
Contact: CAI Yingfeng
E-mail: wanghai1019@163.com; caicaixiao0304@126.com
WANG Hai, XU Yansong, CAI Yingfeng, CHEN Long. Overview of intelligent vehicle multi-target detection technology based on multi-sensor fusion[J]. Journal of Automotive Safety and Energy, 2021, 12(4): 440-455.
URL: https://www.journalase.com/EN/10.3969/j.issn.1674-8484.2021.04.002
| Level | Name | Sustained lateral and longitudinal vehicle motion control | Object and event detection and response | Dynamic driving task (DDT) fallback | Operational design domain |
|---|---|---|---|---|---|
| Level 0 | Emergency assistance | Driver | Driver and system | Driver | Limited |
| Level 1 | Partial driver assistance | Driver and system | Driver and system | Driver | Limited |
| Level 2 | Combined driver assistance | System | Driver and system | Driver | Limited |
| Level 3 | Conditionally automated driving | System | System | DDT fallback-ready user (becomes the driver after taking over) | Limited |
| Level 4 | Highly automated driving | System | System | System | Limited |
| Level 5 | Fully automated driving | System | System | System | Unlimited |
| Sensor | Strengths | Weaknesses | Applications | Cost |
|---|---|---|---|---|
| Camera | High resolution; rich semantics; simple data processing | Poor performance in rain and fog; affected by lighting conditions; prone to false alarms | Obstacle detection; traffic-light detection; traffic-sign detection; lane-line and crosswalk detection | Low |
| Millimeter-wave radar | Unaffected by weather and lighting; relatively long measurement range | Poorly suited to detecting static objects; prone to false detections | Obstacle detection; ranging; speed measurement | Medium |
| LiDAR | Large detection range; high detection accuracy | High cost; poor performance in rain and fog | Obstacle detection; ranging | High |
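A recurring step behind the camera-LiDAR fusion methods this review surveys is projecting LiDAR points into the camera image so detections from the two sensors can be associated. As an illustrative sketch only (not taken from the paper; the function name and the calibration matrices are assumptions), using homogeneous coordinates:

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project LiDAR points (N, 3) into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform from the LiDAR to the camera frame.
    K: 3x3 camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for points in front of the camera,
    plus the boolean mask of which input points were kept.
    """
    n = points.shape[0]
    homo = np.hstack([points, np.ones((n, 1))])   # (N, 4) homogeneous points
    cam = (T_cam_lidar @ homo.T).T[:, :3]         # (N, 3) in the camera frame
    in_front = cam[:, 2] > 0                      # drop points behind the camera
    cam = cam[in_front]
    pix = (K @ cam.T).T                           # apply intrinsics
    pix = pix[:, :2] / pix[:, 2:3]                # perspective divide
    return pix, in_front
```

In practice the extrinsic and intrinsic matrices come from a calibration procedure such as those in refs. [9] and [16]; this sketch only shows the geometry once calibration is known.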
| Dataset | KITTI | BDD | nuScenes | Waymo | ONCE |
|---|---|---|---|---|---|
| Traffic scenes | Urban; suburban; highway | Varied road conditions | Urban | Urban; suburban | Urban; suburban |
| Weather and lighting | Daytime; clear | Clear, cloudy, overcast, rainy, snowy, foggy | Daytime | Daytime, night, dawn, dusk; rainy, clear | Daytime, night; clear, cloudy, rainy |
| Sensors used | LiDAR; grayscale camera; color camera; GPS | Color camera; GPS; IMU; gyroscope | Camera; LiDAR; radar; GPS; IMU | LiDAR; camera | LiDAR; camera |
| Data provided | ~15,000 images; point clouds; GPS and IMU data | ~100,000 HD video clips; 100,000 images | ~1.4 million images; point clouds | 1,000 driving video segments | ~7 million images; point clouds |
| Supported tasks | Stereo vision; optical flow; scene flow; SLAM; object detection and tracking; lane detection; semantic segmentation | Object detection; lane detection; drivable-area detection; semantic segmentation | Object detection; semantic segmentation | Object detection and tracking | Object detection |
| Highlights | The best-known autonomous driving dataset, offering many strong benchmarks | Large-scale in-vehicle dataset with diverse annotations | Rich annotations; includes radar | Diverse scenes | Chinese scenes; the largest such dataset to date |
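Several of the datasets compared above distribute LiDAR scans as flat binary files; KITTI's Velodyne scans, for example, are raw float32 streams of (x, y, z, reflectance) tuples. A minimal loader sketch (the helper name is an assumption, not part of any dataset's official tooling):

```python
import numpy as np

def load_kitti_pointcloud(path):
    """Read one KITTI Velodyne scan.

    Each scan file is a flat little-endian float32 stream laid out as
    repeated (x, y, z, reflectance) tuples, so reshaping to (-1, 4)
    recovers one point per row.
    """
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, 4)
```

The same pattern (adjusting the per-point field count) applies to other point-cloud datasets that ship raw binary scans.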
| [1] | XU Weichao, LI Baojun, LIU sun, et al. Real-time object detection and semantic segmentation for autonomous driving[C]// Proceed Auto Target Recog Navig, Wuhan, China, 2018. |
| [2] | WANG Zhangjing, WU Yu, NIU Qingqing. Multi-sensor fusion in automated driving: A survey[J]. IEEE Access, 2020, 8: 2847-2868. |
| [3] | Van Brummelen J, O’Brien M, Gruyer D, et al. Autonomous vehicle perception: The technology of today and tomorrow[J]. Transport Res Part C: Emerg Tech, 2018, 89: 384-406. |
| [4] | CUI Yaodong, CHEN Ren, CHU Wenbo, et al. Deep learning for image and point cloud fusion in autonomous driving: A review[J]. IEEE Trans Intel Transport Syst, 2020:1-18. |
| [5] | Feng D, Haase-Schütz C, Rosenbaum L, et al. Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges[J]. IEEE Trans Intel Transport Syst, 2019, 22(3): 1341-1360. |
| [6] | ZHAO Zhongqiu, ZHENG Peng, XU Shoutao, et al. Object detection with deep learning: A review[J]. arXiv e-prints: 1807.05511, 2018. |
| [7] | Janai J, Güney F, Behl A, et al. Computer vision for autonomous vehicles: Problems, datasets and state-of-the-art[J]. Found Trends® in Comp Graph Visi, 2017, 12(1-3): 1-308. |
| [8] | Maurer M, Gerdes J C, Lenz B, et al. Autonomous vehicles and autonomous driving in freight transport[M]// Autonomous Driving, Springer, Berlin, Heidelberg, 2016: 365-385. |
| [9] | Pusztai Z, Eichhardt I, Hajder L. Accurate calibration of multi-LiDAR-multi-camera systems[J]. Sensors, 2018, 18(7): 2139-2161. |
| [10] | PAN Wei, Lucas C, Tasmia R, et al. LiDAR and camera detection fusion in a real time industrial multi-sensor collision avoidance system[J]. Electronics, 2018, 7(6): 84. |
| [11] | 薛良金. 毫米波工程基础[M]. 哈尔滨: 哈尔滨工业大学出版社, 2004: 23-24. |
| XUE Liangjin. Foundations for Millimeter Wave Engineering[M]. Harbin: Harbin Institute of Technology Press, 2004: 23-24. (in Chinese) | |
| [12] | Alencar F, Rosero L, Filho C M, et al. Fast metric tracking by detection system: Radar blob and camera fusion[C]// Proceed 2015 12th Latin American Robotics Symp 2015 3rd Brazilian Symp Robotics (LARS-SBR), Recife, Brazil, 2016. |
| [13] | Lee S, Yoon Y J, Lee J E, et al. Human-vehicle classification using feature-based SVM in 77-GHz automotive FMCW radar[J]. IET Radar, Sonar &amp; Navigation, 2017, 11(10): 1589-1596. |
| [14] | Etinger A, Balal N, Litvak B, et al. Non-imaging MM-Wave FMCW sensor for pedestrian detection[J]. IEEE Sensors J, 2014, 14(4):1232-1237. |
| [15] | Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms[J]. Int’l J Comput Vision, 2002, 47(1):7-42. |
| [16] | Sungdae S, Juil S, Kiho K. Indirect correspond- ence-based robust extrinsic calibration of LiDAR and camera[J]. Sensors, 2016, 16(6):933. |
| [17] | Cho H, Seo Y W, Kumar B, et al. A multi-sensor fusion system for moving object detection and tracking in urban driving environments[C]// Proceed IEEE Int’l Conf Robot Auto, Hong Kong, China, 2014. |
| [18] | JI Rongrong, DUAN Lingyu, CHEN Jie, et al. Mining compact bag-of-patterns for low bit rate mobile visual search[J]. Image Processing, 2014, 23(7):3099-3113 |
| [19] | ZHAO Sicheng, CHEN Lujin, YAO Hongxun, et al. Strategy for dynamic 3D depth data matching towards robust action retrieval[J]. Neurocomputing, 2015, 151(mar.5pt.2):533-543. |
| [20] | GUAN Dayan, CAO Yanpeng, YANG Jiangxin, et al. Fusion of multispectral data through illumination-aware deep neural networks for pedestrian detection[J]. Info Fusion, 2018, 50:1097-1105. |
| [21] | Krizhevsky A, Sutskever I, Hinton G E. Imagenet classification with deep convolutional neural networks[J]. Advan Neural Info Process Syst, 2012, 25:1097-105. |
| [22] | Lagos-Lvarez B, Padilla L, Mateu J, et al. A Kalman filter method for estimation and prediction of space-time data with an autoregressive structure[J]. J Statistical Plan Infe, 2019, 203:117-130. |
| [23] | Law H, DENG Jia. CornerNet: Detecting objects as paired keypoints[C]// Proceed Europ Conf Comput Vision (ECCV), Munich, Germany, 2018. |
| [24] | ZHOU Xingyi, WANG Dequan, Krähenbühl P. Objects as points[J]. arXiv preprint arXiv: 1904.07850, 2019. |
| [25] | ZHOU Xingyi, ZHUO Jiacheng, Krähenbühl P. Bottom-up object detection by grouping extreme and center points[C]// Proceed 2019 IEEE Conf Comput Visi Pattern Recog (CVPR), Long Beach, CA, USA, 2019. |
| [26] | Girshick R. Fast R-CNN[J]. arXiv e-prints: 1504.08083, 2015. |
| [27] | Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Trans Pattern Analy Mach Intel, 2017, 39(6):1137-1149. |
| [28] | LIU Wei, Anguelov D, Erhan D, et al. SSD: Single shot multiBox detector[C]// Proceed Europ Conf Comput Vision, Amsterdam, The Netherlands, 2016. |
| [29] | Redmon J, Farhadi A. YOLO9000: Better, faster, stronger[C]// 2017 IEEE Conf Comput Vision Pattern Recog (CVPR), Honolulu, HI, USA, 2017: 6517-6525. |
| [30] | Redmon J, Farhadi A. YOLOv3: An incremental improvement[J]. arXiv e-prints: 1804.02767, 2018. |
| [31] | Bochkovskiy A, WANG Chienyao, Liao H Y M. YOLOv4: Optimal speed and accuracy of object detection[J]. arXiv preprint arXiv: 2004.10934, 2020. |
| [32] | Sermanet P, Eigen D, Zhang X, et al. OverFeat: Integrated recognition, localization and detection using convolutional networks[J]. Eprint Arxiv: 1312.6229v3, 2013. |
| [33] | Qi C R, SU Hao, Mo K, et al. PointNet: Deep learning on point sets for 3D classification and segmentation[C]// Proceed 2017 IEEE Conf Comput Vision Pattern Recog (CVPR), Hawaii, USA, 2017. |
| [34] | Qi C R, LI Yi, SU Hao, et al. PointNet++: Deep hierarchical feature learning on point sets in a metric space[J]. arXiv preprint arXiv:1706.02413, 2017. |
| [35] | SHI Shaoshuai, WANG Xiaogang, LI Hongsheng. PointRCNN: 3D object proposal generation and detection from point cloud[J]. arXiv preprint arXiv:1812.04244, 2018. |
| [36] | ZHOU Yin, Tuzel O. VoxelNet: End-to-end learning for point cloud based 3D object detection[C]// Proceed 2018 IEEE Conf Comput Vision Pattern Recog (CVPR), Salt Lake City, Utah, 2018. |
| [37] | YAN Yan, MAO Yuxing, LI Bo. SECOND: Sparsely embedded convolutional detection[J]. Sensors, 2018, 18(10):3337. |
| [38] | SHI Shaoshuai, GUO Chaoxu, LI Jiang, et al. PV-RCNN: Point-voxel feature set abstraction for 3d object detection[C]// Proceed IEEE/CVF Conf ComputVision Pattern Recog, Seattle, WA, United States, 2020: 10529-10538. |
| [39] | Guan P, Neumann U. 3D point cloud object detection with multi-view convolutional neural network[C]// Proceed 2016 23rd Int’l Conf Pattern Recog (ICPR), Cancun, Mexico, 2016. |
| [40] | Kang Y, Yin H, Berger C. Test your self-driving algorithm: An overview of publicly available driving datasets and virtual testing environments[J]. IEEE Trans Intel Vehi, 2019, 4(2):171-185. |
| [41] | Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]// Proceed 2012 IEEE Conf Comput Vision Pattern Recog, Providence, Rhode Island, 2012. |
| [42] | YU Fisher, CHEN Haofeng, WANG Xin, et al. Bdd100k: A diverse driving dataset for heterogeneous multitask learning[C]// Proceed IEEE/CVF Conf Comput Vision Pattern Recog, Seattle, WA, United States, 2020: 2636-2645. |
| [43] | Caesar H, Bankiti V, Lang A H, et al. nuScenes: A multimodal dataset for autonomous driving[C]// 2020 IEEE/CVF Conf Comput Vision Pattern Recog (CVPR), 2020: 11618-11628. |
| [44] | Sun P, Kretzschmar H, Dotiwalla X, et al. Scalability in perception for autonomous driving: Waymo open dataset[C]// Proceed IEEE / CVF Conf Comput Vision Pattern Recog, Seattle, WA, United States, 2020: 2446-2454. |
| [45] | MAO Jiageng, NIU Minzhe, JIANG Chenhan, et al. One million scenes for autonomous driving: ONCE dataset[J]. arXiv preprint arXiv: 2106.11037, 2021. |
| [46] | Meyer G P, Charland J, Hegde D, et al. Sensor fusion for joint 3d object detection and semantic segmentation[C]// Proceed IEEE/CVF Conf Comput Vision Pattern Recog Workshops. Long Beach, CA, USA, 2019. |
| [47] | LIANG Ming, YANG Bin, WANG Shenlong, et al. Deep continuous fusion for multi-sensor 3D object detection[C]// Proceed Europ Conf Comput Vision (ECCV), Munich, Germany, 2018: 641-656. |
| [48] | Shin K, Kwon Y P, Tomizuka M. Roarnet: A robust 3d object detection based on region approximation refinement[C]// 2019 IEEE Intel Vehi Symp (IV). France, IEEE, 2019: 2510-2515. |
| [49] | Ku J, Mozifian M, Lee J, et al. Joint 3d proposal generation and object detection from view aggregation[C]// 2018 IEEE/RSJ Int’l Conf Intel Robots Syst (IROS), Madrid, Spain, IEEE, 2018: 1-8. |
| [50] | CHEN Xiaozhi, MA Huimin, WAN Ji, et al. Multi-view 3D object detection network for autonomous driving[C]// Proceed 2017 IEEE Conf Comput Vision Pattern Recog (CVPR), Hawaii, USA, 2017. |
| [51] | Qi C R, LIU Wei, WU Chenxia, et al. Frustum pointNets for 3D object detection from RGB-D data[C]// Proceed 2018 IEEE/CVF Conf Comput Vision Pattern Recog (CVPR), Salt Lake City, Utah, 2018. |
| [52] | Qi C R, LIU Wei, WU Chenxia, et al. Frustum pointnets for 3d object detection from rgb-d data[C]// Proceed IEEE Conf Comput Vision Pattern Recog. Salt Lake City, Utah, 2018: 918-927. |
| [53] | SONG Weinan, YUAN Liang, WANG Kun, et al. T-Net: A template-supervised network for task-specific feature extraction in biomedical image analysis[J]. arXiv preprint arXiv: 2002.08406, 2020. |
| [54] | Vora S, Lang A H, Helou B, et al. Pointpainting: Sequential fusion for 3d object detection[C]// Proceed IEEE/CVF Conf Comput Vision Pattern Recog. Seattle, WA, United States, 2020: 4604-4612. |
| [55] | Sindagi V A, Zhou Y, Tuzel O. MVX-Net: Multimodal VoxelNet for 3D object detection[C]// 2019 Int’l Conf Robot Autom (ICRA), Montreal, Canada, 2019: 7276-7282. |
| [56] | JIANG Qiuyu, ZHANG Lijun, MENG Dejian. Target Detection algorithm based on MMW radar and camera fusion[C]// Proceed 2019 IEEE Intel Transport Syst Conf - ITSC, Auckland, New Zealand, 2019. |
| [57] | Chadwick S, Maddern W, Newman P. Distant vehicle detection using radar and vision[C]// 2019 Int’l Conf Robot Autom (ICRA). IEEE, Montreal, Canada, 2019: 8311-8317. |
| [58] | HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conf Comput Vision Pattern Recog (CVPR), Las Vegas, USA, 2016. |
| [59] | John V, Mita S. RVNet: Deep sensor fusion of monocular camera and radar for image-based obstacle detection in challenging environments[C]// Proceed Pacific-Rim Symp Image and Video Tech, Sydney, NSW, Australia, 2019. |
| [60] | WANG Xiao, XU Linhai, SUN Hongbin, et al. On-road vehicle detection and tracking using MMW radar and Monovision fusion[J]. IEEE Trans Intel Transport Syst, 2016, 17(7):2075-2084. |
| [61] | WANG Jiangang, CHEN Simonjian, ZHOU Lubing, et al. Vehicle detection and width estimation in rain by fusing radar and vision[C]// Proceed 2018 15th Int’l Conf Contr, Autom, Robot Vision (ICARCV), Salt Lake City, Utah, 2018. |
| [62] | 王海, 刘明亮, 蔡英凤, 等. 基于激光雷达与毫米波雷达融合的车辆目标检测算法[J]. 江苏大学学报(自然科学版), 2021, 42(4):6. |
| WANG Hai, LIU Mingliang, CAI Yingfeng, et al. Vehicle target detection algorithm based on fusion of laser radar and millimeter wave radar.[J]. J Jiangsu Univ (Nat Sci Edit), 2021, 42(4):389-394. (in Chinese) | |
| [63] | LIU Ze, CAI Yingfeng, WANG Hai, et al. Robust target recognition and tracking of self-driving cars with radar and camera information fusion under severe weather conditions[J]. IEEE Trans Intel Transport Syst, 2021, 99:1-14. |
| [64] | Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: A Simple way to prevent neural networks from overfitting[J]. J Mach Learn Res, 2014, 15(1):1929-1958. |
| [65] | Yun S, Han D, Chun S, et al. CutMix: Regularization strategy to train strong classifiers with localizable features[C]// Proceed IEEE/CVF Int’l Conf Comput Vision, Seoul, Korea, 2019: 6023-6032. |
| [66] | Yoo J, Ahn N, Sohn K A. Rethinking data augmentation for image super-resolution: A comprehensive analysis and a new strategy[C]// Proceed 2020 IEEE/CVF Conf Comput Vision Pattern Recog (CVPR), Seattle, WA, United States, 2020. |
| [67] | Vu T D, Aycard O, Tango F. Object perception for intelligent vehicle applications: A multi-sensor fusion approach[C] // 2014 IEEE Intel Vehi Symp Proceed. MI USA, IEEE, 2014: 774-780. |
| [68] | LIANG Ming, YANG Bin, CHEN Yun, et al. Multi-task multi-sensor fusion for 3d object detection[C] // Proceed IEEE/CVF Conf Comput Vision Pattern Recog, Long Beach, CA, USA, 2019: 7345-7353. |