    Citation: Song Xiaogang, Hu Haoyue, Liang Li, Lu Xiaofeng, Hei Xinhong. Self-supervised monocular depth estimation method for joint semantic segmentation[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202330485

    Self-supervised monocular depth estimation method for joint semantic segmentation

    Abstract: This paper investigates the mutually beneficial relationship between depth estimation and semantic segmentation and proposes USegDepth, a self-supervised monocular depth estimation method with joint semantic segmentation. The two tasks share a single encoder, through which semantic guidance is provided to depth estimation. To further improve the encoder's performance across the two tasks, a multi-task feature extraction module is designed; stacking this module forms the shared encoder, addressing the weak feature representation caused by a limited receptive field and a lack of cross-channel interaction, and further improving model accuracy. In addition, a cross-task interaction module is proposed that refines the feature representations through bidirectional cross-domain information exchange, improving depth estimation performance, especially in weakly textured regions and at object boundaries where photometric-consistency supervision is limited. Training and comprehensive evaluation on the KITTI dataset show that USegDepth reduces the square relative error by 0.176 percentage points compared with SGDepth and reaches a threshold accuracy of 98.4% at a threshold of 1.25^3, demonstrating its high accuracy in depth prediction.
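    For context, the square relative error and the threshold accuracy quoted above are the standard KITTI depth-evaluation metrics rather than quantities specific to this paper. With d_i the predicted depth, d_i^* the ground-truth depth, and T the set of valid pixels, they are usually stated as:

    \text{Sq Rel} = \frac{1}{|T|} \sum_{i \in T} \frac{(d_i - d_i^*)^2}{d_i^*},
    \qquad
    \delta\text{-accuracy} = \frac{1}{|T|} \left| \left\{ i \in T : \max\!\left( \frac{d_i}{d_i^*},\ \frac{d_i^*}{d_i} \right) < 1.25^3 \right\} \right|

    The reported 98.4% is the fraction of pixels satisfying the delta criterion at the loosest standard threshold, 1.25^3.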

       

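    The abstract does not include code, but the arrangement it describes — a shared encoder feeding separate depth and segmentation decoders, with a bidirectional cross-task interaction step in between — can be sketched as follows. This is a minimal, hypothetical PyTorch illustration: the module names (SharedEncoder, CrossTaskInteraction, USegDepthSketch) and the gated-fusion form of the interaction are assumptions for exposition, not the paper's implementation.

    # Minimal sketch (hypothetical, not the paper's code): a shared encoder
    # feeds both a depth head and a segmentation head; a bidirectional
    # cross-task interaction step exchanges information between the two
    # feature streams before decoding.
    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        """Stand-in for the stacked multi-task feature extraction modules."""
        def __init__(self, in_ch=3, width=64):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv2d(in_ch, width, 7, stride=2, padding=3),
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.stem(x)  # shared features used by both tasks

    class CrossTaskInteraction(nn.Module):
        """Bidirectional exchange between depth and segmentation features,
        modeled here as simple gated fusion; the paper's module will differ."""
        def __init__(self, ch):
            super().__init__()
            self.seg_to_depth = nn.Conv2d(ch, ch, 1)
            self.depth_to_seg = nn.Conv2d(ch, ch, 1)

        def forward(self, f_depth, f_seg):
            # Each stream is modulated by a gate computed from the other stream.
            f_depth = f_depth + torch.sigmoid(self.seg_to_depth(f_seg)) * f_depth
            f_seg = f_seg + torch.sigmoid(self.depth_to_seg(f_depth)) * f_seg
            return f_depth, f_seg

    class USegDepthSketch(nn.Module):
        def __init__(self, num_classes=19, width=64):
            super().__init__()
            self.encoder = SharedEncoder(width=width)
            self.interact = CrossTaskInteraction(width)
            self.depth_head = nn.Conv2d(width, 1, 3, padding=1)  # inverse depth
            self.seg_head = nn.Conv2d(width, num_classes, 3, padding=1)

        def forward(self, x):
            f = self.encoder(x)                    # one encoder, two tasks
            f_depth, f_seg = self.interact(f, f)   # bidirectional refinement
            disp = torch.sigmoid(self.depth_head(f_depth))
            seg_logits = self.seg_head(f_seg)
            return disp, seg_logits

    model = USegDepthSketch()
    disp, seg = model(torch.randn(1, 3, 192, 640))  # KITTI-style input size
    print(disp.shape, seg.shape)  # (1, 1, 96, 320), (1, 19, 96, 320)

    In a full self-supervised pipeline, disp would feed a photometric-consistency reconstruction loss between adjacent frames, while seg_logits would be supervised with segmentation labels; the shared encoder is what lets the segmentation signal guide depth in the weakly textured regions the abstract mentions.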