
    Multi-Modality Fusion Perception and Computing in Autonomous Driving

    • Abstract: The goal of autonomous driving is to provide a safe, comfortable, and efficient driving environment for people. For widespread deployment of autonomous driving systems in real environments, sensory data from multiple streams must be processed in a timely and accurate fashion. The challenges are thus twofold: leveraging the multiple sensors available on autonomous vehicles to boost perception accuracy under limited computing resources, and jointly optimizing perception models and the underlying computing models to meet real-time requirements without sacrificing accuracy. To address these challenges, this paper surveys the latest research on perception and edge computing for autonomous driving, presents our own smart-car autonomous driving system, Sonic, and systematically analyzes and compares autonomous driving perception algorithms. Specifically, we propose a multi-modality perception model, ImageFusion, which fuses lidar and camera data for 3D object detection, and a computational optimization framework, MPInfer, targeting real-time inference.
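    The abstract does not describe ImageFusion's internals, but any lidar-camera fusion model for 3D detection needs a geometric alignment step: projecting lidar points into the camera image plane so point features and pixel features can be associated. The sketch below illustrates that standard step only; the function name, argument layout, and pinhole-camera assumption are illustrative and not taken from the paper.

    ```python
    import numpy as np

    def project_lidar_to_image(points, extrinsic, intrinsic):
        """Project lidar points into pixel coordinates (hypothetical helper).

        points:    (N, 3) xyz coordinates in the lidar frame
        extrinsic: (4, 4) rigid transform from the lidar frame to the camera frame
        intrinsic: (3, 3) pinhole camera matrix K
        Returns (M, 2) pixel coordinates and (M,) depths for the points
        that lie in front of the camera.
        """
        n = points.shape[0]
        homo = np.hstack([points, np.ones((n, 1))])   # homogeneous coords, (N, 4)
        cam = (extrinsic @ homo.T).T[:, :3]           # lidar frame -> camera frame
        front = cam[:, 2] > 0                         # drop points behind the camera
        cam = cam[front]
        pix = (intrinsic @ cam.T).T                   # pinhole projection
        pix = pix[:, :2] / pix[:, 2:3]                # divide by depth to get pixels
        return pix, cam[:, 2]
    ```

    With the projected pixel locations in hand, a fusion model can sample image features at each lidar point (or paint points with image semantics) before running 3D detection.
    
    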
