Fu Tao, Chen Zhaojiong, Ye Dongyi. GAN-Based Bidirectional Decoding Feature Fusion Extrapolation Algorithm of Chinese Landscape Painting[J]. Journal of Computer Research and Development, 2022, 59(12): 2816-2830. DOI: 10.7544/issn1000-1239.20210830

    GAN-Based Bidirectional Decoding Feature Fusion Extrapolation Algorithm of Chinese Landscape Painting

An extrapolation method for Chinese landscape paintings based on a generative adversarial network is proposed in this paper. Existing image extrapolation methods are mainly designed for natural images whose large-scale regions each contain a single kind of object with regular textures, such as grass or sky. When applied to Chinese landscape paintings, which have complex details, rich gradations, and varied strokes, these methods often suffer from blur and semantic inconsistency at the boundary of the extrapolated regions. To address these problems, a new bidirectional decoding feature fusion network based on a generative adversarial network (BDFF-GAN) is proposed. The generator, named UY-Net, combines a U-Net architecture with a multi-scale decoder to realize bidirectional decoding feature fusion. Features from different layers of the encoder are assigned to the corresponding layers of the multi-scale decoder, where the first-stage fusion is performed by concatenation, strengthening the connections between features of different scales. In addition, decoded features from the U-Net branch and the multi-scale decoder branch at the same scale are fused through skip connections to further improve the generator. Benefiting from this architecture, UY-Net learns and transmits semantic features and strokes well. Moreover, a multi-discriminator strategy is adopted: a global discriminator takes the whole result image as input to enforce global consistency, while a local discriminator takes a patch from the junction of the source part and the extrapolated part as input to improve coherence and detail (see the sketch below). Experimental results show that BDFF-GAN learns the semantic features and textures of landscape paintings well and outperforms existing methods in the semantic coherence of content and the naturalness of stroke texture. In addition, an interface is provided that lets users control the outline of the extrapolated part through boundary guide lines, which makes the layout of the extrapolated part controllable and expands the generation diversity and interactivity of BDFF-GAN.
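For readers who want a concrete picture of the two-branch decoding and the dual-discriminator setup described above, the PyTorch sketch below shows one way such a design could be wired up. All module names, channel widths, layer counts, and the exact fusion order are illustrative assumptions, not the authors' UY-Net implementation.

```python
# Minimal sketch (assumed architecture, not the paper's code):
# an encoder feeding (a) a U-Net decoder with skip connections and
# (b) a multi-scale decoder driven by encoder features, with the two
# decoding paths fused again at matching scales.
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))


class UYNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(3, 32), conv_block(32, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        # U-Net decoding branch
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)            # cat(up2(e3), e2)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)             # cat(up1(d2), e1)
        # multi-scale decoding branch, fed by encoder features
        self.ms2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.ms1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        # second-stage fusion of the two branches at the same scale
        self.fuse2 = conv_block(128, 64)           # cat(dec2, ms2(e3))
        self.fuse1 = conv_block(64, 32)            # cat(dec1, ms1(fused2))
        self.out = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)                          # full resolution
        e2 = self.enc2(self.pool(e1))              # 1/2
        e3 = self.enc3(self.pool(e2))              # 1/4 (bottleneck)
        # U-Net branch: decode with encoder skip connections
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        # multi-scale branch: decode encoder features directly,
        # then fuse with the U-Net branch at the same scale
        m2 = self.fuse2(torch.cat([d2, self.ms2(e3)], dim=1))
        m1 = self.fuse1(torch.cat([d1, self.ms1(m2)], dim=1))
        return torch.tanh(self.out(m1))


class PatchDiscriminatorSketch(nn.Module):
    """Instantiated twice: once on the full result image (global consistency)
    and once on a crop taken across the junction of the source region and
    the extrapolated region (local coherence and detail)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1))

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    g = UYNetSketch()
    y = g(torch.randn(1, 3, 256, 256))
    print(y.shape)  # torch.Size([1, 3, 256, 256])
```

The key design point the sketch tries to capture is that every decoder scale sees two sources of information: encoder features of the matching resolution (first-stage fusion by concatenation) and the other decoding branch's output at the same scale (second-stage fusion), which is what the abstract calls bidirectional decoding feature fusion.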
