Abstract:
To break the performance bottleneck caused by traditional fusion rules, an infrared and visible image fusion network based on a multiclassification adversarial mechanism in the feature space is proposed. Compared with existing methods, the proposed method has a more reasonable fusion rule and better performance. First, an autoencoder with an attention mechanism is trained to perform feature extraction and image reconstruction. Then, a generative adversarial network (GAN) is adopted to learn the fusion rule in the feature space produced by the trained encoder. Specifically, we design a fusion network as the generator to fuse the features extracted from the source images, and a multi-classifier as the discriminator. The multiclassification adversarial learning makes the fused features approximate both the infrared and visible probability distributions simultaneously, so as to preserve the most salient characteristics of the source images. Finally, the fused image is reconstructed from the fused features by the trained decoder. Qualitative experiments show that, in subjective evaluation, the proposed method outperforms state-of-the-art infrared and visible image fusion methods such as GTF, MDLatLRR, DenseFuse, FusionGAN, and U2Fusion. In addition, the objective evaluation shows that our method achieves twice as many best quantitative metrics as U2Fusion, and its fusion speed is more than five times that of the other comparative methods.
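The multiclassification adversarial objective described above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation: it assumes the discriminator emits two independent scores per sample, one per modality, so that the fusion network (generator) can be rewarded for making fused features look like both infrared and visible features at once; all function names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    """Elementwise logistic function mapping logits to probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, target, eps=1e-8):
    """Binary cross-entropy of probabilities p against a scalar target."""
    return -(target * np.log(p + eps) + (1 - target) * np.log(1 - p + eps)).mean()

def discriminator_loss(logits_ir, logits_vis, logits_fused):
    """Multi-classifier loss (hypothetical formulation).

    Each logits array has shape (batch, 2): column 0 scores
    "is infrared", column 1 scores "is visible".
    Real infrared features target (1, 0), real visible features
    target (0, 1), and fused features target (0, 0), i.e. the
    discriminator tries to reject fused features as neither modality.
    """
    p_ir, p_vis, p_f = sigmoid(logits_ir), sigmoid(logits_vis), sigmoid(logits_fused)
    loss = bce(p_ir[:, 0], 1.0) + bce(p_ir[:, 1], 0.0)     # real IR  -> (1, 0)
    loss += bce(p_vis[:, 0], 0.0) + bce(p_vis[:, 1], 1.0)  # real VIS -> (0, 1)
    loss += bce(p_f[:, 0], 0.0) + bce(p_f[:, 1], 0.0)      # fused    -> (0, 0)
    return loss

def generator_loss(logits_fused):
    """Adversarial target for the fusion network: fused features
    should be classified as infrared AND visible simultaneously,
    i.e. both scores driven toward 1."""
    p = sigmoid(logits_fused)
    return bce(p[:, 0], 1.0) + bce(p[:, 1], 1.0)
```

Under this two-score formulation, minimizing `generator_loss` pulls the fused-feature distribution toward both source distributions at once, which is the stated goal of the multiclassification adversarial mechanism.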