
    Self-Supervised Monocular Depth Estimation Method for Joint Semantic Segmentation

    Abstract: This paper investigates the mutually beneficial relationship between depth estimation and semantic segmentation, and proposes USegDepth, a self-supervised monocular depth estimation method with joint semantic segmentation. The two tasks share an encoder, which provides semantic guidance for depth estimation. To further improve the encoder's cross-task performance, a multi-task feature extraction module is designed; stacking this module forms the shared encoder, addressing the weak feature representation caused by a limited receptive field and a lack of cross-channel interaction, and further improving model accuracy. In addition, a cross-task interaction module is proposed that refines the feature representations through bidirectional cross-domain information exchange, improving depth estimation performance, especially in weakly textured regions and at object boundaries, where photometric-consistency supervision is limited. Training and comprehensive evaluation on the KITTI dataset show that the mean squared relative error of USegDepth is 0.176 percentage points lower than that of SGDepth, and that its threshold accuracy reaches 98.4% at a threshold of 1.25³, demonstrating the high accuracy of USegDepth in depth prediction.
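    To make the described architecture concrete (a shared encoder feeding two task heads, with a bidirectional cross-task interaction step), here is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the paper's implementation: the encoder depth, channel widths, 1×1-convolution interaction, and the 19-class segmentation head are all placeholder choices.

```python
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    """Toy stand-in for the stacked multi-task feature extraction modules."""

    def __init__(self, channels=(3, 32, 64)):
        super().__init__()
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


class CrossTaskInteraction(nn.Module):
    """Bidirectional exchange between depth and segmentation features (illustrative)."""

    def __init__(self, c):
        super().__init__()
        self.seg_to_depth = nn.Conv2d(c, c, 1)
        self.depth_to_seg = nn.Conv2d(c, c, 1)

    def forward(self, f_depth, f_seg):
        # Each branch is refined by a projection of the other branch's features.
        return (f_depth + self.seg_to_depth(f_seg),
                f_seg + self.depth_to_seg(f_depth))


class USegDepthSketch(nn.Module):
    """Hypothetical two-head network: shared encoder -> interaction -> depth/seg heads."""

    def __init__(self, num_classes=19):  # 19 classes assumed (Cityscapes convention)
        super().__init__()
        self.encoder = SharedEncoder()
        self.interact = CrossTaskInteraction(64)
        # Sigmoid bounds the disparity-like output to (0, 1), a common
        # convention in self-supervised monocular depth networks.
        self.depth_head = nn.Sequential(nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid())
        self.seg_head = nn.Conv2d(64, num_classes, 3, padding=1)

    def forward(self, x):
        f = self.encoder(x)             # shared features provide semantic guidance
        f_d, f_s = self.interact(f, f)  # bidirectional cross-domain refinement
        return self.depth_head(f_d), self.seg_head(f_s)


if __name__ == "__main__":
    disp, seg = USegDepthSketch()(torch.randn(1, 3, 192, 640))
    print(disp.shape, seg.shape)  # [1, 1, 48, 160] and [1, 19, 48, 160]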
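```

    In a self-supervised setting such as this one, the depth head would be trained with a photometric reprojection loss between adjacent video frames rather than ground-truth depth, which is why the abstract highlights weakly textured regions and object boundaries as the places where the cross-task interaction helps most.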

       
