Jiang Zetao, Huang Qinyang, Zhang Huijuan, Jin Xin, Huang Jingfan, Liao Peiqi. Unpaired Low-Light Image Enhancement Method Based on Global Consistency[J]. Journal of Computer Research and Development, 2025, 62(4): 876-887. DOI: 10.7544/issn1000-1239.202330904

Unpaired Low-Light Image Enhancement Method Based on Global Consistency

Funds: This work was supported by the National Natural Science Foundation of China (62172118), the Natural Science Key Foundation of Guangxi (2021GXNSFDA196002), the Project of Guangxi Key Laboratory of Image and Graphic Intelligent Processing (GIIP2302, GIIP2303, GIIP2304), and the Innovation Project of GUET Graduate Education (2023YCXS046).
  • Author Bios:

    Jiang Zetao: born in 1961. PhD, professor. His main research interests include image processing and computer vision

    Huang Qinyang: born in 1996. Master. His main research interest is artificial intelligence

    Zhang Huijuan: born in 1996. Master. Her main research interests include big data analysis and recommendation systems

    Jin Xin: born in 1998. Master. His main research interests include image processing and computer vision

    Huang Jingfan: born in 1999. Master. His main research interests include image processing and computer vision

    Liao Peiqi: born in 1995. Master. His main research interests include image processing and computer vision

  • Received Date: November 08, 2023
  • Revised Date: August 11, 2024
  • Accepted Date: September 02, 2024
  • Available Online: December 11, 2024
  • Because producing paired images is expensive, unpaired low-light image enhancement methods, which do not rely on paired training data, are more practical. However, their lack of detailed supervision signals leads to visual degradation in the output image, such as globally inconsistent exposure, color distortion, and heavy noise, which makes them challenging to apply in practice. We propose an unpaired low-light image enhancement method based on global consistency (GCLLE) to meet practical needs. First, the global consistency preserving module (GCPM) remodels and fuses same-scale encoder and decoder features to correct contextual information across scales, ensuring globally consistent exposure adjustment and global structural consistency in the output image, so that the light distribution is uniform and distortion is avoided. Second, the local smoothing and modulation module (LSMM) learns a set of local low-order curve mappings that extend the dynamic range and further improve image quality, yielding realistic and natural enhancement. Third, the proposed deep feature enhancement module (DFEM) fuses deep features through two-way pooling, compressing irrelevant information and highlighting more discriminative encoded features; this reduces inaccuracy, makes it easier for the decoder to capture low-intensity signals in the image, and retains more detail. Unlike paired enhancement methods, which focus on one-to-one mappings between the pixels of paired images, GCLLE enhances images by reducing the stylistic difference between low-light images and unpaired normal-light images. Extensive experiments on the MIT and LSRW datasets show that the proposed method outperforms classical low-light enhancement algorithms on several objective metrics, demonstrating its effectiveness and superiority.
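The abstract describes GCPM, LSMM, and DFEM only at a high level, so the PyTorch sketch below is one illustrative reading of those descriptions rather than the authors' implementation. All class names, channel widths, and the exact fusion, gating, and pooling arithmetic here are assumptions: GCPM is rendered as same-scale encoder-decoder feature fusion with a globally pooled channel gate, DFEM as two-way (average plus max) pooled attention over deep features, and LSMM as an iterated low-order quadratic curve mapping in the spirit of Zero-DCE.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCPM(nn.Module):
    """Global consistency preserving module (sketch): remodels and fuses
    same-scale encoder and decoder features, then applies a gate computed
    from globally pooled context so exposure is adjusted consistently
    across the whole image."""

    def __init__(self, channels: int):
        super().__init__()
        self.remodel = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        fused = self.remodel(torch.cat([enc_feat, dec_feat], dim=1))
        return fused * self.gate(fused)


class DFEM(nn.Module):
    """Deep feature enhancement module (sketch): two-way (average + max)
    pooling feeds a shared bottleneck MLP whose combined response reweights
    channels, compressing irrelevant information and highlighting the more
    discriminative deep features."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return x * torch.sigmoid(avg + mx)


class LSMM(nn.Module):
    """Local smoothing and modulation module (sketch): predicts per-pixel
    coefficients of a low-order (quadratic) curve and applies the curve
    iteratively, extending the dynamic range of the input."""

    def __init__(self, in_channels: int = 3, iters: int = 4):
        super().__init__()
        self.iters = iters
        self.coeff = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, in_channels * iters, kernel_size=3, padding=1),
            nn.Tanh(),  # keep curve coefficients in [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x
        for a in torch.chunk(self.coeff(x), self.iters, dim=1):
            out = out + a * out * (1.0 - out)  # LE(x) = x + a * x * (1 - x)
        return out


if __name__ == "__main__":
    img = torch.rand(1, 3, 256, 256)  # dummy low-light image in [0, 1]
    print(LSMM()(img).shape)          # torch.Size([1, 3, 256, 256])
```

In a full GCLLE generator these blocks would sit inside a U-Net-style encoder-decoder trained adversarially against unpaired normal-light images; that training loop, and the exact placement of each module, go beyond what the abstract specifies.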

  • [1]
    Land E H. The Retinex theory of color vision[J]. Scientific American, 1977, 237(6): 108−129 doi: 10.1038/scientificamerican1277-108
    [2]
    Bychkovsky V, Paris S, Chan E, et al. Learning photographic global tonal adjustment with a database of input/output image pairs[C]//Proc of the 24th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2011: 97−104
    [3]
    Gharbi M, Chen Jiawen, Barron J T, et al. Deep bilateral learning for real-time image enhancement[J]. ACM Transactions on Graphics, 2017, 36(4): 1−12
    [4]
    Ignatov A, Kobyshev N, Timofte R, et al. DSLR-quality photos on mobile devices with deep convolutional networks[C]//Proc of the 16th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 3277−3285
    [5]
    Ren Wenqi, Liu Sifei, Ma Lin, et al. Low-light image enhancement via a deep hybrid network[J]. IEEE Transactions on Image Processing, 2019, 28(9): 4364−4375 doi: 10.1109/TIP.2019.2910412
    [6]
    Yan Zhicheng, Zhang Hao, Wang Baoyuan, et al. Automatic photo adjustment using deep neural networks[J]. ACM Transactions on Graphics, 2016, 35(2): 1−15
    [7]
    Wei Chen, Wang Wenjing, Yang Wenhan, et al. Deep Retinex decomposition for low-light enhancement[J]. arXiv preprint, arXiv: 1808.04560, 2018
    [8]
    Jiang Hai, Zhu Xuan, Ren Yang, et al. R2RNet: Low-light image enhancement via real-low to real-normal network[J]. Journal of Visual Communication and Image Representation, 2023, 90: 103712
    [9]
    江泽涛,覃露露,秦嘉奇,等. 一种基于MDARNet的低照度图像增强方法[J]. 软件学报,2021,32(12):3977−3991

    Jiang Zetao, Qin Lulu, Qin Jiaqi, et al. Low-light image enhancement method based on MDARNet[J]. Journal of Software, 2021, 32(12): 3977−3991(in Chinese)
    [10]
    Sun Xiaopeng, Li Muxingzi, He Tianyu, et al. Enhance images as you like with unpaired learning[J]. arXiv preprint, arXiv: 2110.01161, 2021
    [11]
    Zhu Junyan, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proc of the 16th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 2223−2232
    [12]
    Ni Zhangkai, Yang Wenhan, Wang Shiqi, et al. Unpaired image enhancement with quality-attention generative adversarial network[C]//Proc of the 28th ACM Int Conf on Multimedia. New York: ACM, 2020: 1697−1705
    [13]
    Ni Zhangkai Yang Wenhan, Wang Hanli, et al. Cycle-interactive generative adversarial network for robust unsupervised low-light enhancement[C]//Proc of the 30th ACM Int Conf on Multimedia. New York: ACM, 2022: 1484−1492
    [14]
    Jiang Yifan, Gong Xinyu, Liu Ding, et al. EnlightenGAN: Deep light enhancement without paired supervision[J]. IEEE Transactions on Image Processing, 2021, 30: 2340−2349 doi: 10.1109/TIP.2021.3051462
    [15]
    Ni Zhangkai, Yang Wenhan, Wang Shiqi, et al. Towards unsupervised deep image enhancement with generative adversarial network[J]. IEEE Transactions on Image Processing, 2020, 29: 9140−9151 doi: 10.1109/TIP.2020.3023615
    [16]
    Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proc of the 27th Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2014: 2672−2680
    [17]
    Huang Yongsong, Jiang Zetao, Lan Rushi, et al. Infrared image super-resolution via transfer learning and PSRGAN[J]. IEEE Signal Processing Letters, 2021, 28: 982−986 doi: 10.1109/LSP.2021.3077801
    [18]
    Liu Mingyu, Breuel T, Kautz J. Unsupervised image-to-image translation networks[C]//Proc of the 31st Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2017: 700−708
    [19]
    Mao Xudong, Li Qing, Xie Haoran, et al. Least squares generative adversarial networks[C]//Proc of the 16th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 2794−2802
    [20]
    Gulrajani I, Ahmed F, Arjovsky M, et al. Improved training of wasserstein GANs[C]//Proc of the 31st Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2017: 5769−5779
    [21]
    Lee C, Lee C, Kim C S. Contrast enhancement based on layered difference representation of 2D histograms[J]. IEEE Transactions on Image Processing, 2013, 22(12): 5372−5384 doi: 10.1109/TIP.2013.2284059
    [22]
    Thomas G, Flores-Tapia D, Pistorius S. Histogram specification: A fast and flexible method to process digital images[J]. IEEE Transactions on Instrumentation and Measurement, 2011, 60(5): 1565−1578 doi: 10.1109/TIM.2010.2089110
    [23]
    Lore K G, Akintayo A, Sarkar S. LLNet: A deep autoencoder approach to natural low-light image enhancement[J]. Pattern Recognition, 2017, 61: 650−662 doi: 10.1016/j.patcog.2016.06.008
    [24]
    Wang Ruixing, Zhang Qing, Fu C W, et al. Underexposed photo enhancement using deep illumination estimation[C]//Proc of the 32nd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 6849−6857
    [25]
    Yang Wenhan, Wang Shiqi, Fang Yuming, et al. From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement[C]//Proc of the 33rd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 3063−3072
    [26]
    Guo Chunle, Li Chongyi, Guo Jichang, et al. Zero-reference deep curve estimation for low-light image enhancement[C]//Proc of the 33rd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 1780−1789
    [27]
    Ma Long, Ma Tengyu, Liu Risheng, et al. Toward fast, flexible, and robust low-light image enhancement[C]//Proc of the 35th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 5637−5646
    [28]
    Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]//Proc of the 18th Int Conf on Medical Image Computing and Computer-Assisted Intervention. Berlin: Springer, 2015: 234−241
    [29]
    Si Chenyang, Yu Weihao, Zhou Pan, et al. Inception transformer[C]//Proc of the 36th Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2022: 23495−23509
    [30]
    Chen Jiawen, Adams A, Wadhwa N, et al. Bilateral guided upsampling[J]. ACM Transactions on Graphics, 2016, 35(6): 1−8
    [31]
    Jolicoeur-Martineau A. The relativistic discriminator: A key element missing from standard GAN[J]. arXiv preprint, arXiv: 1807.00734, 2018
    [32]
    Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint, arXiv: 1409.1556, 2014
    [33]
    RichardWebster B, Anthony S E, Scheirer W J. Psyphy: A psychophysics driven evaluation framework for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(9): 2280−2286
    [34]
    Deng Jia, Dong Wei, Socher R, et al. ImageNet: A large-scale hierarchical image database[C]//Proc of the 14th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2009: 248−255
    [35]
    Zhang Yonghua, Zhang Jiawan, Guo Xiaojie. Kindling the darkness: A practical low-light image enhancer[C]//Proc of the 27th ACM Int Conf on Multimedia. New York: ACM, 2019: 1632−1640
    [36]
    Liu Risheng, Ma Long, Zhang Jiaao, et al. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement[C]//Proc of the 34th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 10561−10570
    [37]
    Zhang R, Isola P, Efros A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//Proc of the 31st IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 586−595
    [38]
    Talebi H, Milanfar P. NIMA: Neural image assessment[J]. IEEE Transactions on Image Processing, 2018, 27(8): 3998−4011 doi: 10.1109/TIP.2018.2831899
    [39]
    Kingma D P, Ba J. Adam: A method for stochastic optimization[J]. arXiv preprint, arXiv: 1412.6980, 2014
    [40]
    Loh Y P, Chan C S. Getting to know low-light images with the exclusively dark dataset[J]. Computer Vision and Image Understanding, 2019, 178: 30−42 doi: 10.1016/j.cviu.2018.10.010
    [41]
    Li Xiang, Wang Wenhai, Wu Lijun, et al. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection[C]//Proc of the 34th Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2020: 21002−21012