Zheng Han, Wang Ning, Ma Xinzhu, Zhang Hong, Wang Zhihui, Li Haojie. Point Cloud Scene Flow Propagation Update Method Based on Neighborhood Consistency[J]. Journal of Computer Research and Development, 2023, 60(2): 426-434. DOI: 10.7544/issn1000-1239.202110745

Point Cloud Scene Flow Propagation Update Method Based on Neighborhood Consistency

Funds: This work was supported by the National Natural Science Foundation of China (61976038, 61932020, 61772108, U1908210) and the Fundamental Research Funds for the Central Universities (DUT20GF18).
More Information
  • Received Date: July 08, 2021
  • Revised Date: February 24, 2022
  • Available Online: February 26, 2023
  • Scene flow is the 3D motion field between consecutive dynamic scenes and is widely used in robotics and autonomous driving tasks. Existing methods ignore the correlation among points within the point cloud and focus only on the point-by-point matching relationship between the source and target point clouds. Because this matching relationship depends entirely on the feature information of the point cloud data, it remains challenging to estimate scene flow accurately at points with insufficient local feature information. Exploiting the correlation within local regions of the source point cloud, the NCPUM (neighborhood consistency propagation update method) is proposed to propagate scene flow from high-confidence points to low-confidence points within local regions, thereby refining the scene flow at points with insufficient local feature information. Specifically, NCPUM consists of two modules: a confidence prediction module, which predicts the confidence of the source point cloud according to an a priori distribution map of the scene flow, and a scene flow propagation module, which updates the scene flow of the low-confidence point set under a local-region consistency constraint. We evaluate NCPUM on challenging synthetic data from FlyingThings3D and on real LiDAR scans from KITTI, and it outperforms previous methods by a large margin in accuracy, especially on KITTI, because the neighborhood consistency assumption agrees better with the characteristics of real LiDAR scans.
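
The propagation idea in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name propagate_flow, the neighborhood size k, and the confidence threshold are assumptions made for the example. For each low-confidence point, the scene flow is replaced by a confidence-weighted average of the flow at its k nearest high-confidence neighbors, enforcing local consistency.

```python
import numpy as np

def propagate_flow(points, flow, conf, k=8, conf_threshold=0.5):
    """Illustrative sketch of neighborhood-consistent flow propagation.

    points: (N, 3) source point cloud
    flow:   (N, 3) initial per-point scene flow
    conf:   (N,)   predicted confidence in [0, 1]
    Returns an updated (N, 3) scene flow.
    """
    high = np.where(conf >= conf_threshold)[0]   # reliable points
    low = np.where(conf < conf_threshold)[0]     # points to be updated
    if len(high) == 0 or len(low) == 0:
        return flow  # nothing to propagate from, or nothing to update

    updated = flow.copy()
    for i in low:
        # k nearest high-confidence neighbors of the low-confidence point
        d = np.linalg.norm(points[high] - points[i], axis=1)
        nn = high[np.argsort(d)[:k]]
        # confidence-weighted average of neighbor flows (local consistency)
        w = conf[nn] / conf[nn].sum()
        updated[i] = (w[:, None] * flow[nn]).sum(axis=0)
    return updated

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(100, 3))
    flw = rng.normal(size=(100, 3))
    cnf = rng.uniform(size=100)
    print(propagate_flow(pts, flw, cnf).shape)  # (100, 3)
```

A real system would predict the confidence with a learned module and apply the update within feature-defined local regions rather than plain Euclidean neighborhoods; the sketch only conveys the high-to-low-confidence propagation step.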

  • [1]
    Ouyang Wanli, Zeng Xingyu, Wang Xiaogang, et al. DeepID-Net: Deformable deep convolutional neural networks for object detection[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2016, 39(7): 1320−1334
    [2]
    廖瑞杰,杨绍发,孟文霞,等. SegGraph: 室外场景三维点云闭环检测算法[J]. 计算机研究与发展,2019,56(2):338−348 doi: 10.7544/issn1000-1239.2019.20180092

    Liao Ruijie, Yang Shaofa, Meng Wenxia, et al. SegGraph: An algorithm for loop-closure detection in outdoor scenes using 3D point clouds[J]. Journal of Computer Research and Development, 2019, 56(2): 338−348 (in Chinese) doi: 10.7544/issn1000-1239.2019.20180092
    [3]
    Liu Xingyu, Qi C R, Guibas L J . FlowNet3D: Learning scene flow in 3D point clouds[C] //Proc of Computer Vision & Pattern Recognition. Piscataway, NJ: IEEE, 2019: 529−537
    [4]
    Gu Xiuye, Wang Yijie, Wu Chongruo, et al. HPLFlowNet: Hierarchical permutohedral lattice FlowNet for scene flow estimation on large-scale point clouds[C] //Proc of Computer Vision & Pattern Recognition. Piscataway, NJ: IEEE, 2019: 3254−3263
    [5]
    张燕咏,张莎,张昱,等. 基于多模态融合的自动驾驶感知及计算[J]. 计算机研究与发展,2020,57(9):1781−1799

    Zhang Yanyong, Zhang Sha, Zhang Yu, et al. Multi-modality fusion perception and computing in autonomous driving[J]. Journal of Computer Research and Development, 2020, 57(9): 1781−1799 (in Chinese)
    [6]
    Wu Wenxuan, Wang Zhiyuan, Li Zhuwen, et al. PointPWC-Net: Cost volume on point clouds for (self-)supervised scene flow estimation[C] //Proc of European Conf on Computer Vision. Piscataway, NJ: IEEE, 2020: 88−107
    [7]
    Vedula S, Baker S, Rander P, et al. Three-dimensional scene flow[C] //Proc of Int Conf on Computer Vision. Piscataway, NJ: IEEE, 1999: 722−729
    [8]
    Jiang Huaizu, Sun Deqing, Jampani V, et al. SENSE: A shared encoder network for scene-flow estimation[C] //Proc of Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2019: 3194−3203
    [9]
    Teed Z, Deng Jia. RAFT-3D: Scene flow using rigid-motion embeddings[C] //Proc of Computer Vision & Pattern Recognition. Piscataway, NJ: IEEE, 2021: 8375−8384
    [10]
    Menze M, Geiger A. Object scene flow for autonomous vehicles[C] //Proc of Computer Vision & Pattern Recognition. Piscataway, NJ: IEEE, 2015: 3061−3070
    [11]
    Ma Weichiu, Wang Shenlong, Hu Rui , et al. Deep rigid instance scene flow[C] //Proc of Computer Vision & Pattern Recognition. Piscataway, NJ: IEEE, 2019: 3614−3622
    [12]
    Quiroga J, Brox T, F Devernay, et al. Dense semi-rigid scene flow estimation from RGBD Images[C] //Proc of European Conf on Computer Vision. Piscataway, NJ: IEEE, 2014: 567−582
    [13]
    Wu Wenxuan, Qi Zhongang, Li Fuxin. PointConv: Deep convolutional networks on 3D point clouds[C] //Proc of Computer Vision & Pattern Recognition. Piscataway, NJ: IEEE, 2019: 9621−9630
    [14]
    Qi C R, Su Hao, Mo Kaichun, et al. PointNet: Deep learning on point sets for 3D classification and segmentation[C] //Proc of Computer Vision & Pattern Recognition. Piscataway, NJ: IEEE, 2017: 77−85
    [15]
    Qi C R, Yi Li, Su Hao, et al. PointNet++: Deep hierarchical feature learning on point sets in a metric space[C] //Proc of Neural Information Processing Systems. Cambridge, MA: MIT Press, 2017: 5099−5108
    [16]
    Puy G, Boulch A, Marlet R. FLOT: Scene flow on point clouds guided by optimal transport[C] //Proc of European Conf on Computer Vision. Piscataway, NJ: IEEE, 2020: 527−544
    [17]
    Wang Guangming, Wu Xinrui, Liu Zhe, et al. Hierarchical attention learning of scene flow in 3D point clouds[J]. IEEE Transactions Image Process, 2021(30): 5168−5181
    [18]
    Gojcic Z, Litany O, Wieser A, et al. Weakly supervised learning of rigid 3D scene flow[C] //Proc of Computer Vision & Pattern Recognition. Piscataway, NJ: IEEE, 2021: 5692−5703
    [19]
    Chen Yuhua, Gool L V, Schmid C, et al. Consistency guided scene flow estimation[C] //Proc of European Conf on Computer Vision. Piscataway, NJ: IEEE, 2020: 125−141
    [20]
    Hui T W, Chen C L. LiteFlowNet3: Resolving correspondence ambiguity for more accurate optical flow estimation[C] //Proc of European Conf on Computer Vision. Piscataway, NJ: IEEE, 2020: 169−184
    [21]
    李曈,马伟,徐士彪,等. 适应立体匹配任务的端到端深度网络[J]. 计算机研究与发展,2020,57(7):1531−1538

    Li Tong, Ma Wei, Xu Shibiao, et al. Task-adaptive end-to-end networks for stereo matching[J]. Journal of Computer Research and Development, 2020, 57(7): 1531−1538 (in Chinese)
    [22]
    Mayer N, Ilg E, Hausser P, et al. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation[C] //Proc of Computer Vision & Pattern Recognition. Piscataway, NJ: IEEE, 2016: 4040−4048
    [23]
    Menze M, Heipke C, Geiger A. Joint 3D estimation of vehicles and scene flow[J]. International Society for Photogrammetry and Remote Sensing Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2015, II-3/W5: 427−434