Unpaired Low-Light Image Enhancement Based on Global Consistency
Graphical Abstract
Abstract
Because producing paired images is expensive, unpaired low-light image enhancement methods are more practical, as they do not rely on paired image data. However, the lack of detailed supervision signals leads to visual degradation such as globally inconsistent exposure, color distortion, and heavy noise in the output image, which limits their practical use. We propose an unpaired low-light enhancement method based on global consistency (GCLLE) to meet these practical needs. First, the Global Consistency Preserving Module (GCPM) remodels and fuses same-scale encoder and decoder features to correct contextual information across scales, ensuring globally consistent exposure adjustment and global structural consistency in the output image, which makes the light distribution uniform and avoids distortion. Second, the Local Smoothing and Modulation Module (LSMM) learns a set of local low-order curve mappings that extend the dynamic range and further improve image quality, yielding realistic and natural enhancement. Third, the proposed Deep Feature Enhancement Module (DFEM) fuses deep features through two-way pooling, compressing irrelevant information and highlighting more discriminative encoded features; this reduces inaccuracies, makes it easier for the decoder to capture low-intensity signals in the image, and retains more detail. Unlike paired enhancement, which focuses on one-to-one mappings between pixels in paired images, GCLLE enhances images by reducing the stylistic difference between low-light images and unpaired normal-light images. Extensive experiments on the MIT and LSRW datasets show that the proposed method outperforms classical low-light enhancement algorithms on several objective metrics, demonstrating its effectiveness and superiority.
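As a point of reference for the local low-order curve mappings learned by LSMM, curve-estimation enhancement methods commonly apply a quadratic per-pixel adjustment of the form below; the abstract does not give LSMM's exact formulation, so this is only an assumed sketch, where I(x) denotes the normalized intensity at pixel x and α is a locally predicted adjustment parameter.

\[
  \mathrm{LE}\bigl(I(x);\,\alpha\bigr) = I(x) + \alpha\, I(x)\bigl(1 - I(x)\bigr), \qquad \alpha \in [-1,\, 1].
\]

For α in [−1, 1] such a mapping is monotonic and keeps normalized values within [0, 1], so applying it per pixel with a spatially varying α stretches the dynamic range without clipping.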