Abstract:
Image cross-domain transformation, also known as image translation, is a technology that aims to transform images of a source domain into images of a target domain. Specifically, the translated images adopt the style of the target-domain images (texture, color, etc.) while preserving the content structure of the source-domain images (contour, pose, etc.). Image cross-domain transformation is widely used in computer vision, for example in photo editing and video special-effects production. In recent years, this technology has developed rapidly on the basis of deep learning, especially generative adversarial networks, and has achieved impressive results. However, problems remain, including color mode collapse and the failure to preserve content structure in the transformed images. To address these problems, we propose an image cross-domain transformation algorithm based on self-similarity and contrastive learning. The algorithm uses a pre-trained deep neural network to extract the content and style features of the images, and takes the perceptual loss together with a self-similarity loss as the content loss function. Meanwhile, a relaxed optimal transport loss and a moment matching loss serve as the style loss function for training the proposed network. For contrastive learning, the transformed images and the target-domain images are treated as positive sample pairs, while the transformed images and the source-domain images are treated as negative sample pairs. The proposed algorithm is evaluated on four datasets. The results show that the proposed method preserves the content structure of the source-domain images, reduces color mode collapse, and makes the style of the translated images more consistent with that of the guidance images.
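To make the loss design concrete, the following PyTorch sketch shows one way the terms named above (self-similarity content loss, relaxed optimal transport and moment matching style losses, and a contrastive term with the stated positive/negative pairing) can be composed. All function names, the InfoNCE-style contrastive formulation, and the equal weighting are illustrative assumptions for exposition, not the authors' exact implementation; the feature tensors stand in for activations from a pre-trained network such as VGG.

```python
import torch
import torch.nn.functional as F

def self_similarity_loss(feat_x, feat_y):
    """Content term: match the pairwise cosine self-similarity matrices of
    source and translated features, so relative spatial structure is kept
    even when absolute appearance changes."""
    def sim(f):                               # f: (N, C) feature vectors
        f = F.normalize(f, dim=1)
        return f @ f.t()                      # (N, N) cosine self-similarity
    return F.l1_loss(sim(feat_x), sim(feat_y))

def relaxed_ot_loss(feat_y, feat_s):
    """Style term: a relaxed optimal-transport distance, averaging the two
    one-sided nearest-neighbour cosine costs between feature sets."""
    cost = 1.0 - F.normalize(feat_y, dim=1) @ F.normalize(feat_s, dim=1).t()
    return 0.5 * (cost.min(dim=1).values.mean() + cost.min(dim=0).values.mean())

def moment_matching_loss(feat_y, feat_s):
    """Style term: match first- and second-order feature statistics."""
    mu_y, mu_s = feat_y.mean(0), feat_s.mean(0)
    cov_y = (feat_y - mu_y).t() @ (feat_y - mu_y) / feat_y.shape[0]
    cov_s = (feat_s - mu_s).t() @ (feat_s - mu_s) / feat_s.shape[0]
    return F.l1_loss(mu_y, mu_s) + F.l1_loss(cov_y, cov_s)

def contrastive_loss(emb_y, emb_s, emb_x, tau=0.07):
    """InfoNCE-style term (an assumed formulation): translated/target pairs
    act as positives, translated/source pairs as negatives."""
    pos = (F.cosine_similarity(emb_y, emb_s) / tau).exp()
    neg = (F.cosine_similarity(emb_y, emb_x) / tau).exp()
    return -torch.log(pos / (pos + neg)).mean()

# Toy usage with random tensors standing in for extracted activations; the
# perceptual content term is approximated here by a plain L1 feature distance.
fx, fy, fs = torch.randn(3, 64, 128).unbind(0)   # source / translated / style
total = (F.l1_loss(fx, fy) + self_similarity_loss(fx, fy)          # content
         + relaxed_ot_loss(fy, fs) + moment_matching_loss(fy, fs)  # style
         + contrastive_loss(fy.mean(0, True), fs.mean(0, True), fx.mean(0, True)))
```

In this sketch each term is weighted equally; in practice the balance between content, style, and contrastive terms would be tuned per dataset.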