Zhang Yanyong, Zhang Sha, Zhang Yu, Ji Jianmin, Duan Yifan, Huang Yitong, Peng Jie, Zhang Yuxiang. Multi-Modality Fusion Perception and Computing in Autonomous Driving[J]. Journal of Computer Research and Development, 2020, 57(9): 1781-1799. DOI: 10.7544/issn1000-1239.2020.20200255

    Multi-Modality Fusion Perception and Computing in Autonomous Driving

The goal of autonomous driving is to provide a safe, comfortable, and efficient driving environment for people. For autonomous driving systems to be deployed widely, the sensory data from multiple streams must be processed in a timely and accurate fashion. The challenges that arise are thus two-fold: leveraging the multiple sensors available on autonomous vehicles to boost perception accuracy, and jointly optimizing the perception models and the underlying computing models to meet real-time requirements. To address these challenges, this paper surveys the latest research on sensing and edge computing for autonomous driving and presents our own autonomous driving system, Sonic. Specifically, we propose a multi-modality perception model, ImageFusion, which combines lidar data and camera data for 3D object detection, and a computational optimization framework, MPInfer.
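This excerpt does not give ImageFusion's internals, but a common way to fuse lidar and camera data for 3D detection is to project each lidar point into the image plane using the camera calibration and decorate it with the color (or learned image features) of the pixel it lands on, before feeding the points to a lidar-based 3D detector. The sketch below illustrates this point-level fusion under assumed inputs; the function names and the calibration matrices T_cam_lidar and K are illustrative, not the paper's actual API.

```python
# Hypothetical sketch of point-level lidar-camera fusion: project lidar
# points into the image and append each point's RGB color to its xyz.
# This is NOT the paper's ImageFusion implementation, only a common baseline.
import numpy as np

def project_points(points_xyz, T_cam_lidar, K):
    """Project Nx3 lidar points into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform (lidar frame -> camera frame).
    K:           3x3 camera intrinsic matrix.
    Returns (uv, depth): Nx2 pixel coordinates and per-point depth.
    """
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])      # Nx4 homogeneous
    cam = (T_cam_lidar @ homo.T).T[:, :3]                # Nx3, camera frame
    depth = cam[:, 2]
    uvw = (K @ cam.T).T                                  # Nx3 projective coords
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)   # perspective divide
    return uv, depth

def decorate_points(points_xyz, image, T_cam_lidar, K):
    """Append the RGB color under each projected point to its xyz.

    Points behind the camera or outside the image are dropped.
    Returns an Mx6 array of (x, y, z, r, g, b) for the M valid points.
    """
    h, w, _ = image.shape
    uv, depth = project_points(points_xyz, T_cam_lidar, K)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = image[v[valid], u[valid]].astype(np.float32) / 255.0
    return np.hstack([points_xyz[valid], rgb])
```

One appeal of this design is that the decorated (x, y, z, r, g, b) points can be consumed by any existing lidar-based 3D detector without changing its architecture, which is why point-level decoration is a popular fusion baseline.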