A Semantic Segmentation Method for Traffic Scenes Based on Category-Aware Domain Adaptation

Abstract: As a fundamental research problem in machine vision, image semantic segmentation aims to classify every pixel in a color image and predict its corresponding semantic label. Most existing semantic segmentation methods are fully supervised models that depend heavily on precise per-pixel annotations. Although weakly supervised and semi-supervised segmentation methods can incorporate unlabeled samples, they often produce semantically inconsistent or misclassified regions because they make little use of spatial semantic information, and they are difficult to apply directly to other cross-domain unlabeled datasets. To address these problems, this paper proposes a semantic segmentation method based on category-aware domain adaptation for cross-domain unlabeled datasets. First, the method adopts an optimized upsampling scheme and a new loss function based on focal loss, which effectively alleviates the difficulty that existing methods have in correctly segmenting categories with little training data. Second, the proposed category-aware domain adaptation method addresses cross-domain semantic segmentation across different datasets, improving the mean intersection-over-union (mIoU) on unlabeled target-domain images by 6% over the average of existing methods. The proposed method is validated on five datasets, and the experimental results demonstrate its effectiveness and generalization ability.
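
The loss function mentioned above is described only as "based on focal loss" (Lin et al., 2017). As background, the sketch below shows a standard per-pixel focal loss for segmentation, assuming PyTorch; the function name, the default gamma and alpha values, and the ignore_index convention are illustrative assumptions rather than the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, gamma=2.0, alpha=0.25, ignore_index=255):
        """Standard per-pixel focal loss for semantic segmentation.

        logits:  (N, C, H, W) raw class scores
        targets: (N, H, W) integer ground-truth labels
        """
        # Per-pixel cross-entropy equals -log(p_t) for the true class;
        # ignored pixels contribute zero with reduction="none".
        ce = F.cross_entropy(logits, targets, reduction="none",
                             ignore_index=ignore_index)
        pt = torch.exp(-ce)  # p_t: predicted probability of the true class
        # A single scalar alpha is a simplification of the per-class
        # balancing weight alpha_t used in the original focal loss paper.
        return (alpha * (1.0 - pt) ** gamma * ce).mean()

Setting gamma = 0 and alpha = 1 recovers plain cross-entropy; increasing gamma down-weights easy pixels and concentrates the gradient on hard, rare-class pixels, which matches the abstract's motivation for improving small categories.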

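The reported 6% gain is measured in mean intersection-over-union (mIoU): per-class IoU = |P ∩ G| / |P ∪ G|, averaged over classes. A minimal NumPy reference computation is sketched below; the ignore_index convention for unlabeled pixels is again an assumption, not something specified in the abstract.

    import numpy as np

    def mean_iou(pred, gt, num_classes, ignore_index=255):
        """Mean IoU between two integer label maps of the same shape."""
        valid = gt != ignore_index
        ious = []
        for c in range(num_classes):
            p = (pred == c) & valid
            g = (gt == c) & valid
            union = np.logical_or(p, g).sum()
            if union == 0:  # class absent from both maps: skip it
                continue
            ious.append(np.logical_and(p, g).sum() / union)
        return float(np.mean(ious))

Because rare classes contribute to the average with the same weight as frequent ones, mIoU is sensitive to exactly the small categories the proposed loss targets.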