Abstract:
Infrared images can distinguish targets from their backgrounds by differences in thermal radiation, even under poor lighting conditions. By contrast, visible images capture texture details at high spatial resolution. Moreover, both infrared and visible images carry corresponding semantic information. Infrared and visible image fusion should therefore retain the radiation information of the infrared image and the texture details of the visible image, while also preserving the semantic information of both. Semantic segmentation can transform the source images into masks that carry this semantic information. In this paper, an infrared and visible image fusion method based on semantic segmentation is proposed. It overcomes a shortcoming of existing fusion methods, which treat all regions uniformly rather than adapting to region-specific content. Considering the distinct information carried by different regions of infrared and visible images, we design two region-specific loss functions to improve the quality of the fused image under the framework of a generative adversarial network. First, we obtain masks with semantic information by applying semantic segmentation to the infrared images; we then use these masks to divide the infrared and visible images into an infrared target region, an infrared background region, a visible target region, and a visible background region. Second, we fuse the target regions and the background regions separately, using a different method for each. Finally, we combine the two fused regions to obtain the final fused image. Experiments show that the proposed method outperforms state-of-the-art methods, with higher contrast in the target region and richer texture details in the background region.
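For illustration, a minimal sketch of the mask-based decomposition and recombination described above, assuming a binary segmentation mask (1 = target, 0 = background) and registered source images. The function names are hypothetical, and simple averaging stands in for the paper's GAN-based, region-specific fusion; this is not the authors' implementation.

```python
import numpy as np

def split_regions(image, mask):
    """Split an image into target and background regions using a binary
    semantic mask (1 = target, 0 = background). Hypothetical helper."""
    target = image * mask
    background = image * (1 - mask)
    return target, background

def merge_regions(fused_target, fused_background, mask):
    """Recombine the separately fused regions into the final fused image."""
    return fused_target * mask + fused_background * (1 - mask)

# Toy usage with random data standing in for registered source images
# and a semantic segmentation mask.
ir = np.random.rand(256, 256).astype(np.float32)   # infrared image
vis = np.random.rand(256, 256).astype(np.float32)  # visible image
mask = (np.random.rand(256, 256) > 0.5).astype(np.float32)

ir_tgt, ir_bg = split_regions(ir, mask)
vis_tgt, vis_bg = split_regions(vis, mask)

# Placeholder fusion: in the paper each region pair is fused under a
# region-specific GAN loss; plain averaging is used here only as a stand-in.
fused_tgt = 0.5 * (ir_tgt + vis_tgt)
fused_bg = 0.5 * (ir_bg + vis_bg)

fused = merge_regions(fused_tgt, fused_bg, mask)
```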