    Zhang Hao, Ma Jiayi, Fan Fan, Huang Jun, Ma Yong. Infrared and Visible Image Fusion Based on Multiclassification Adversarial Mechanism in Feature Space[J]. Journal of Computer Research and Development, 2023, 60(3): 690-704. DOI: 10.7544/issn1000-1239.202110639

    Infrared and Visible Image Fusion Based on Multiclassification Adversarial Mechanism in Feature Space


Abstract: To break the performance bottleneck caused by traditional fusion rules, an infrared and visible image fusion network based on a multiclassification adversarial mechanism in the feature space is proposed. Compared with existing methods, the proposed method employs a more reasonable fusion rule and achieves better performance. First, an autoencoder with an attention mechanism is trained to perform feature extraction and image reconstruction. Then, a generative adversarial network (GAN) is adopted to learn the fusion rule in the feature space produced by the trained encoder. Specifically, we design a feature fusion network as the generator to fuse the features extracted from the source images, and a multi-classifier as the discriminator. This multiclassification adversarial learning drives the fused features to simultaneously approximate the probability distributions of both the infrared and visible modalities, thereby preserving the most salient characteristics of the source images. Finally, the fused image is reconstructed from the fused features by the trained decoder. Qualitative experiments show that the proposed method outperforms state-of-the-art infrared and visible image fusion methods, including GTF, MDLatLRR, DenseFuse, FusionGAN, and U2Fusion, in subjective evaluation. In quantitative evaluation, our method achieves the best score on twice as many metrics as U2Fusion, and its fusion speed is more than 5 times that of the other compared methods.
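
The multiclassification adversarial mechanism described above can be summarized in a few lines of training code. The following is a minimal PyTorch sketch, not the authors' implementation: the discriminator architecture, channel sizes, and label scheme are illustrative assumptions. It shows the key idea that real infrared and visible features each carry one modality label, while the generator is rewarded when the fused features are classified as both modalities at once.

```python
# Minimal sketch of the multiclassification adversarial losses (assumed design,
# not the authors' released code). Features are assumed to be 64-channel maps
# produced by the pretrained encoder; the fusion network outputs feat_fused.
import torch
import torch.nn as nn

class MultiClassifier(nn.Module):
    """Discriminator: maps a feature map to two logits (infrared, visible)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),  # one logit per modality class
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.net(feat)

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(D, feat_ir, feat_vis, feat_fused):
    # Real infrared features -> label (1, 0); real visible features -> (0, 1);
    # fused features are rejected for both classes -> (0, 0).
    ones = feat_ir.new_ones(feat_ir.size(0), 1)
    zeros = feat_ir.new_zeros(feat_ir.size(0), 1)
    loss_ir = bce(D(feat_ir), torch.cat([ones, zeros], dim=1))
    loss_vis = bce(D(feat_vis), torch.cat([zeros, ones], dim=1))
    loss_fused = bce(D(feat_fused.detach()), torch.cat([zeros, zeros], dim=1))
    return loss_ir + loss_vis + loss_fused

def generator_loss(D, feat_fused):
    # Adversarial push: the fused features should be classified as BOTH
    # infrared and visible, i.e. approximate both modality distributions.
    return bce(D(feat_fused), feat_fused.new_ones(feat_fused.size(0), 2))

if __name__ == "__main__":
    D = MultiClassifier(channels=64)
    f_ir, f_vis, f_fused = (torch.randn(4, 64, 16, 16) for _ in range(3))
    print(discriminator_loss(D, f_ir, f_vis, f_fused).item())
    print(generator_loss(D, f_fused).item())
```

Because both class heads are driven toward 1 for the fused features, gradients from the infrared head and the visible head jointly shape the fused feature distribution, which is what allows the fusion rule to be learned rather than hand-crafted.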

       
