    Citation: Zhao Lei, Zhang Huiming, Xing Wei, Lin Zhijie, Lin Huaizhong, Lu Dongming, Pan Xun, Xu Duanqing. Image Cross-Domain Translation Algorithm Based on Self-Similarity and Contrastive Learning[J]. Journal of Computer Research and Development, 2023, 60(4): 930-946. DOI: 10.7544/issn1000-1239.202220039

    Image Cross-Domain Translation Algorithm Based on Self-Similarity and Contrastive Learning

    • Image cross-domain transformation, also known as image translation, aims to convert images from a source domain into images of a target domain. Specifically, the translated images should adopt the style of the target domain images (texture, color, etc.) while preserving the content structure of the source domain images (contour, posture, etc.). Image cross-domain transformation is widely used in vision applications such as photo editing and video special-effects production. In recent years, the technology has developed rapidly on the basis of deep learning, especially generative adversarial networks, and has achieved impressive results. However, problems remain, including color mode collapse and the failure to preserve content structure in the translated images. To address these problems, we propose an image cross-domain transformation algorithm based on self-similarity and contrastive learning. The algorithm uses a pre-trained deep neural network to extract the content and style features of the images, and takes the perceptual loss together with a self-similarity-based loss as the image content loss function. At the same time, a relaxed optimal transport loss and a moment matching loss are used as the image style loss function to train the proposed network. In addition, the translated images and the target domain images are treated as positive sample pairs, while the translated images and the source domain images are treated as negative pairs for contrastive learning. The proposed algorithm is verified by experiments on four datasets. The results show that the proposed method preserves the content structure of the source domain images, reduces color mode collapse, and makes the style of the translated images more consistent with that of the guidance images. (An illustrative sketch of these loss terms is given below.)
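    The abstract names several loss terms: a perceptual loss plus a self-similarity loss for content, a relaxed optimal transport loss plus a moment matching loss for style, and a contrastive term that pairs the translated image with the target domain image (positive) and with the source domain image (negative). The PyTorch sketch below is purely illustrative and is not the authors' implementation: all function and tensor names (feat_src, feat_out, emb_out, etc.) are hypothetical, the feature maps and embeddings are assumed to come from a pre-trained extractor such as VGG, and the perceptual loss, the relaxed optimal transport term, and the loss weights are omitted for brevity.

    import torch
    import torch.nn.functional as F

    def self_similarity_loss(feat_src, feat_out):
        # Content term: the pairwise cosine self-similarity matrix of the
        # source-image features should match that of the translated image.
        def cos_sim_matrix(feat):
            b, c, h, w = feat.shape
            f = feat.reshape(b, c, h * w).permute(0, 2, 1)   # (B, HW, C)
            f = F.normalize(f, dim=-1)
            return torch.bmm(f, f.transpose(1, 2))           # (B, HW, HW)
        return F.l1_loss(cos_sim_matrix(feat_out), cos_sim_matrix(feat_src))

    def moment_matching_loss(feat_style, feat_out):
        # Style term: match the first two moments (channel-wise mean and std)
        # of the translated features to those of the guidance-image features.
        mu_s, std_s = feat_style.mean(dim=(2, 3)), feat_style.std(dim=(2, 3))
        mu_o, std_o = feat_out.mean(dim=(2, 3)), feat_out.std(dim=(2, 3))
        return F.l1_loss(mu_o, mu_s) + F.l1_loss(std_o, std_s)

    def contrastive_loss(emb_out, emb_target, emb_source, tau=0.07):
        # InfoNCE-style contrast: (translated, target-domain) embeddings form
        # the positive pair, (translated, source-domain) the negative pair.
        emb_out = F.normalize(emb_out, dim=-1)               # (B, D)
        emb_target = F.normalize(emb_target, dim=-1)
        emb_source = F.normalize(emb_source, dim=-1)
        pos = (emb_out * emb_target).sum(dim=-1, keepdim=True) / tau
        neg = (emb_out * emb_source).sum(dim=-1, keepdim=True) / tau
        logits = torch.cat([pos, neg], dim=1)                # (B, 2)
        labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, labels)               # positive is class 0

    In a training loop, these terms would simply be summed with the perceptual and optimal transport losses under suitable weights; the weighting scheme here is an assumption for illustration, not a value taken from the paper.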
