
Unpaired Low-Light Image Enhancement Based on Global Consistency

       

Abstract: Because paired low-light images are expensive and difficult to produce, unpaired low-light image enhancement methods, which do not rely on paired data, are of greater practical value. However, their lack of detailed supervision signals leads to visual degradation in the output, such as globally inconsistent exposure, color distortion, and heavy noise, which limits their practical application. To better meet practical needs, we propose an unpaired low-light enhancement method based on global consistency (GCLLE). First, a global consistency preserving module (GCPM) remodels and fuses same-scale features from the encoder and decoder to correct contextual information across scales, ensuring globally consistent exposure adjustment and global structural consistency in the output, so that brightness is evenly distributed and distortion is avoided. Second, a local smoothing and modulation module (LSMM) learns a set of local low-order curve mappings that widen the dynamic range and further improve image quality, yielding realistic and natural enhancement. Third, a deep feature enhancement module (DFEM) fuses deep features via two-way pooling, compressing irrelevant information and highlighting more discriminative encoded features; this reduces inaccurate information, makes it easier for the decoder to capture low-intensity signals in the image, and preserves more detail. Unlike paired enhancement methods, which focus on one-to-one mappings between the pixels of paired images, GCLLE enhances images by narrowing the style gap between low-light images and unpaired normal-light images. Extensive experiments on the MIT and LSRW datasets show that the proposed method outperforms existing representative low-light enhancement methods on several objective metrics, demonstrating its effectiveness.
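The abstract describes LSMM's low-order curve mappings and DFEM's two-way pooling only at a high level. The sketch below illustrates plausible forms of both, assuming a Zero-DCE-style quadratic curve for the mapping and a CBAM-style average-plus-max channel gate for the pooling; the function names, exact formulas, and fusion rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quadratic_curve_enhance(img: np.ndarray, alpha: float) -> np.ndarray:
    # One iteration of a low-order (quadratic) curve mapping on an image
    # normalized to [0, 1]: LE(x) = x + alpha * x * (1 - x).
    # For alpha in [-1, 1] the output provably stays in [0, 1], and
    # alpha > 0 brightens dark regions more than bright ones.
    return img + alpha * img * (1.0 - img)

def dual_pool_channel_gate(feat: np.ndarray) -> np.ndarray:
    # Fuse average- and max-pooled channel statistics of a (C, H, W)
    # feature map into a sigmoid gate that reweights the channels,
    # suppressing uninformative ones and emphasizing discriminative ones.
    avg = feat.mean(axis=(1, 2))               # (C,) average-pooling path
    mx = feat.max(axis=(1, 2))                 # (C,) max-pooling path
    gate = 1.0 / (1.0 + np.exp(-(avg + mx)))   # sigmoid of fused statistics
    return feat * gate[:, None, None]          # channel-wise modulation

# Usage: brighten a uniformly dark patch.
dark = np.full((8, 8), 0.2)
bright = quadratic_curve_enhance(dark, 0.8)    # 0.2 -> 0.2 + 0.8*0.2*0.8
```

In practice both pieces would operate on learned, spatially local parameters (a per-region alpha map, a small MLP between the pooled statistics and the gate); the scalar forms above only show the shape of each computation.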

       
