    Jiang Zetao, Zhu Wencai, Jin Xin, Liao Peiqi, Huang Jingfan. An Image Captioning Method Based on DSC-Net[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202330523

    An Image Captioning Method Based on DSC-Net

    • Because they lie closer to the text domain, the grid features extracted by the CLIP (contrastive language-image pre-training) image encoder are easy to convert into corresponding natural-language semantics. This alleviates the semantic gap problem and makes them a promising source of visual features for image captioning. However, this approach overlooks the fact that dividing the image into grids may split a complete object across several grids. Such segmentation inevitably prevents the extracted features from fully expressing the object, and in turn prevents the generated sentence from accurately expressing objects and the relationships between them. To address this limitation of CLIP grid features, we propose the dual semantic collaborative network (DSC-Net) for image captioning. Specifically, a dual semantic collaborative self-attention (DSCS) module is first proposed to enhance the expression of object information in CLIP grid features. A dual semantic collaborative cross-attention (DSCC) module is then proposed to integrate semantic information between grids and objects into the visual features used to predict sentences. Finally, a dual semantic fusion (DSF) module is proposed to provide region-oriented fused features to the two collaboration modules and to resolve the correlation conflicts that may arise during semantic collaboration. In extensive experiments on the COCO dataset, the proposed model achieves a CIDEr score of 138.5% on the offline Karpathy test split and 137.6% on the official online test, a clear advantage over current mainstream image captioning methods. (A minimal illustrative sketch of this collaboration pipeline follows.)
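    To make the collaboration scheme above concrete, the short PyTorch sketch below chains a DSF-style fusion step, a DSCS-style self-attention step, and a DSCC-style cross-attention step. It is an illustration under assumptions, not the paper's implementation: the class name DualSemanticCollaboration, the argument names grid_feats, obj_feats, and text_queries, the additive fusion, and all shapes are invented here for exposition, and DSC-Net's actual modules may differ.

    # dsc_sketch.py -- illustrative only; the DSF/DSCS/DSCC analogues, feature
    # shapes, and fusion design below are assumptions, not the paper's code.
    import torch
    import torch.nn as nn


    class DualSemanticCollaboration(nn.Module):
        """Toy stand-in for a DSF -> DSCS -> DSCC pipeline."""

        def __init__(self, dim: int = 512, heads: int = 8):
            super().__init__()
            # DSF analogue: build region-oriented fused features by letting
            # grid features attend to object/region features.
            self.fuse_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            # DSCS analogue: self-attention that re-injects the fused semantics
            # so an object split across several grids regains a joint expression.
            self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            # DSCC analogue: cross-attention from caption-side queries to the
            # collaboratively enhanced visual features used for word prediction.
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, grid_feats, obj_feats, text_queries):
            # grid_feats:   (B, G, dim) CLIP grid features
            # obj_feats:    (B, N, dim) detected object/region features
            # text_queries: (B, T, dim) decoder states for T caption tokens
            fused, _ = self.fuse_attn(grid_feats, obj_feats, obj_feats)
            collab = grid_feats + fused  # assumed additive collaboration
            enhanced, _ = self.self_attn(collab, collab, collab)
            out, _ = self.cross_attn(text_queries, enhanced, enhanced)
            return out  # (B, T, dim), fed to a vocabulary head in a full model


    if __name__ == "__main__":
        # Toy usage: 2 images, 7 x 7 = 49 grids, 5 objects, 10 caption tokens.
        model = DualSemanticCollaboration()
        g = torch.randn(2, 49, 512)
        o = torch.randn(2, 5, 512)
        q = torch.randn(2, 10, 512)
        print(model(g, o, q).shape)  # torch.Size([2, 10, 512])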
