Volume 32, Issue 4, Jul. 2023
DENG Jiawei, YU Zhenming, PANG Guangyao, “Colour Variation Minimization Retinex Decomposition and Enhancement with a Multi-Branch Decomposition Network,” Chinese Journal of Electronics, vol. 32, no. 4, pp. 908-919, 2023, doi: 10.23919/cje.2021.00.350

Colour Variation Minimization Retinex Decomposition and Enhancement with a Multi-Branch Decomposition Network

doi: 10.23919/cje.2021.00.350
Funds:  This work was supported by the Guangxi Innovation-Driven Development Special Fund Project (Guike AA18118036), the National Natural Science Foundation of China (62002268, 62262059), and the Natural Science Foundation of Guangxi Province (2021JJA170178, 2021JJB170060)
  • Author Bio:

    Jiawei DENG received the M.S. degree from Guilin University of Electronic Technology. His research interests include deep learning and image processing. (Email: accd920@foxmail.com)

    Zhenming YU (corresponding author) is a Professor at Wuzhou University and holds a Ph.D. He is Deputy Director of the DSP Expert Committee of the Chinese Institute of Electronics, Vice Chairman of the Guangxi Artificial Intelligence Society, and Vice Chairman of the Guangxi Optical Society. His main research interests focus on image processing and motor control. (Email: 24032784@qq.com)

    Guangyao PANG is currently an Associate Professor with the Guangxi Key Laboratory of Machine Vision and Intelligent Control and the School of Data Science and Software Engineering, Wuzhou University, China. His main research interests include deep learning, natural language processing, recommender systems, and transfer learning. (Email: pangguangyao@snnu.edu.cn)

  • Received Date: 2022-07-03
  • Accepted Date: 2023-01-09
  • Available Online: 2023-01-14
  • Publish Date: 2023-07-05
  • This paper proposes colour variation minimization Retinex decomposition and enhancement with a multi-branch decomposition network (CvmD-net) to remove darkness from a single image. The network overcomes the dependence of Retinex-based deep learning models on paired bright images when processing dark images. Specifically, the method lights up dark input images in two stages: image decomposition and brightness optimization. An input constant feature prior (ICFP) mechanism, built on constant features of reflectance, extracts structure and colour from the input images and constrains the reflectance images output by the decomposition model, reducing colour distortion and artefacts. Noise amplification during decomposition is addressed by a multi-branch decomposition network, in which sub-networks with different structures focus on different prediction tasks. The paper further proposes a reference mechanism for input brightness, which optimizes the output brightness distribution by computing a reference brightness from the dark images. Experimental results on two benchmark datasets, LOL and ZeroDCE, demonstrate that the proposed method better balances dense noise suppression and colour restoration. For evaluation on real images, Skynet images captured at night are collected to verify the performance of the proposed approach. Compared with state-of-the-art non-reference Retinex decomposition-enhancement models, the proposed method achieves the best brightness optimization.
  • [1]
    J. Huang, P. F. Zhu, M. R. Geng, et al., “Range scaling global U-Net for perceptual image enhancement on mobile devices,” in Proceedings of the European Conference on Computer Vision, Munich, Germany, pp.230–242, 2018.
    [2]
    K. Singh, A. Seth, H. S. Sandhu, et al., “A comprehensive review of convolutional neural network based image enhancement techniques,” in 2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN), Pondicherry, India, pp.1–6, 2019.
    [3]
    C. Chen, Q. F. Chen, M. Do, et al., “Seeing motion in the dark,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), pp.3184–3193, 2019.
    [4]
    G. Z. Tiron and M. S. Poboroniuc, “Neural network based traffic sign recognition for autonomous driving,” in 2019 International Conference on Electromechanical and Energy Systems (SIELMEN), Craiova, Romania, pp.1–5, 2019.
    [5]
    Y. T. Kim, “Contrast enhancement using brightness preserving bi-histogram equalization,” IEEE Transactions on Consumer Electronics, vol.43, no.1, pp.1–8, 1997. doi: 10.1109/30.580378
    [6]
    M. Abdullah-Al-Wadud, H. Kabir, M. A. A. Dewan, et al., “A dynamic histogram equalization for image contrast enhancement,” in 2007 Digest of Technical Papers International Conference on Consumer Electronics, Las Vegas, NV, USA, pp.1–2, 2007.
    [7]
    D. J. Jobson, Z. Rahman, and G. A. Woodell, “Properties and performance of a center/surround retinex,” IEEE Transactions on Image Processing, vol.6, no.3, pp.451–462, 1997. doi: 10.1109/83.557356
    [8]
    R. Bhadu, R. Sharma, S. K. Soni, et al., “Sparse representation and homomorphic filter for capsule endoscopy image enhancement,” in 2017 International Conference on Computing, Communication and Automation (ICCCA), Greater Noida, India, pp.1178–1182, 2017.
    [9]
    M. Gharbi, J. W. Chen, J. T. Barron, et al., “Deep bilateral learning for real-time image enhancement,” ACM Transactions on Graphics, vol.36, no.4, article no.articleno.118, 2017. doi: 10.1145/3072959.3073592
    [10]
    Y. F. Jiang, X. Y. Gong, D. Liu, et al., “EnlightenGAN: Deep light enhancement without paired supervision,” IEEE Transactions on Image Processing, vol.30, pp.2340–2349, 2021. doi: 10.1109/TIP.2021.3051462
    [11]
    Y. Zhang, X. G. Di, B. Zhang, et al., Self-supervised image enhancement network: Training with low light images only, arXiv preprint arXiv: 2002.11300, 2020, doi: 10.48550/arXiv.2002.11300.
    [12]
    C. L. Guo, C. Y. Li, J. C. Guo, et al., “Zero-reference deep curve estimation for low-light image enhancement,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, pp.1777–1786, 2020.
    [13]
    S. K. Naik and C. A. Murthy, “Hue-preserving color image enhancement without gamut problem,” IEEE Transactions on Image Processing, vol.12, no.12, pp.1591–1598, 2003. doi: 10.1109/TIP.2003.819231
    [14]
    T. Celik and T. Tjahjadi, “Contextual and variational contrast enhancement,” IEEE Transactions on Image Processing, vol.20, no.12, pp.3431–3441, 2011. doi: 10.1109/TIP.2011.2157513
    [15]
    C. Lee, C. Lee, and C. S. Kim, “Contrast enhancement based on layered difference representation of 2D histograms,” IEEE Transactions on Image Processing, vol.22, no.12, pp.5372–5384, 2013. doi: 10.1109/TIP.2013.2284059
    [16]
    X. Y. Fu, D. L. Zeng, Y. Huang, et al., “A weighted variational model for simultaneous reflectance and illumination estimation,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp.2782–2790, 2016.
    [17]
    X. J. Guo, Y. Li, and H. B. Ling, “LIME: Low-light image enhancement via illumination map estimation,” IEEE Transactions on Image Processing, vol.26, no.2, pp.982–993, 2017. doi: 10.1109/TIP.2016.2639450
    [18]
    M. D. Li, J. Y. Liu, W. H. Yang, et al., “Structure-revealing low-light image enhancement via robust retinex model,” IEEE Transactions on Image Processing, vol.27, no.6, pp.2828–2841, 2018. doi: 10.1109/TIP.2018.2810539
    [19]
    M. Tiwari and B. Gupta, “Brightness preserving contrast enhancement of medical images using adaptive gamma correction and homomorphic filtering,” in 2016 IEEE Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, pp.1–4, 2016.
    [20]
    R. P. Luan, “An enhancement algorithm for non-uniform illumination image based on two homomorphic filters,” in 2016 2nd Workshop on Advanced Research and Technology in Industry Applications (WARTIA 2016), Dalian, China, pp.1213–1216, 2016.
    [21]
    C. Wei, W. J. Wang, W. H. Yang, et al., “Deep retinex decomposition for low-light enhancement,” in British Machine Vision Conference 2018, Newcastle, UK, pp.155–167, 2018.
    [22]
    W. J. Wang, C. Wei, W. H. Yang, et al., “GLADNet: low-light enhancement network with global awareness,” in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi'an, China, pp.751–755, 2018.
    [23]
    C. Chen, Q. F. Chen, J. Xu, et al., “Learning to see in the dark,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp.3291–3300, 2018.
    [24]
    J. Y. Wang, W. M. Tan, X. J. Niu, et al., “RDGAN: Retinex decomposition based adversarial learning for low-light enhancement,” in 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, pp.1186–1191, 2019.
    [25]
    A. Ignatov, N. Kobyshev, R. Timofte, et al., “WESPE: Weakly supervised photo enhancer for digital cameras,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, pp.691–700, 2018.
    [26]
    Y. S. Chen, Y. C. Wang, M. H. Kao, et al., “Deep photo enhancer: unpaired learning for image enhancement from photographs with GANs,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp.6306–6314, 2018.
    [27]
    F. L. Luo, Z. H. Zou, J. M. Liu, et al., “Dimensionality reduction and classification of hyperspectral image via multistructure unified discriminative embedding,” IEEE Transactions on Geoscience and Remote Sensing, vol.60, article no.5517916, 2022. doi: 10.1109/TGRS.2021.3128764
    [28]
    G. Q. Qi, L. Chang, Y. Q. Luo, et al., “A precise multi-exposure image fusion method based on low-level features,” Sensors, vol.20, no.6, article no.1597, 2020. doi: 10.3390/s20061597
    [29]
    Z. Q. Zhu, H. Y. Wei, G. Hu, et al., “A novel fast single image dehazing algorithm based on artificial multiexposure image fusion,” IEEE Transactions on Instrumentation and Measurement, vol.70, article no.5001523, 2021. doi: 10.1109/TIM.2020.3024335
    [30]
    O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Munich, Germany, pp.234–241, 2015.
    [31]
    C. Y. Li, J. C. Guo, F. Porikli, et al., “LightenNet: A convolutional neural network for weakly illuminated image enhancement,” Pattern Recognition Letters, vol.104, pp.15–22, 2018. doi: 10.1016/j.patrec.2018.01.010
    [32]
    B. L. Cai, X. M. Xu, K. L. Guo, et al., “A joint intrinsic-extrinsic prior model for Retinex,” in 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp.4020–4029, 2017.
    [33]
    F. L. Luo, H. Huang, Z. Z. Ma, et al., “Semisupervised sparse manifold discriminative analysis for feature extraction of hyperspectral images,” IEEE Transactions on Geoscience and Remote Sensing, vol.54, no.10, pp.6197–6211, 2016. doi: 10.1109/TGRS.2016.2583219
    [34]
    G. Buchsbaum, “A spatial processor model for object colour perception,” Journal of the Franklin Institute, vol.310, no.1, pp.1–26, 1980. doi: 10.1016/0016-0032(80)90058-7
    [35]
    Z. Q. Zhu, H. P. Yin, Y. Chai, et al., “A novel multi-modality image fusion method based on image decomposition and sparse representation,” Information Sciences, vol.432, pp.516–529, 2018. doi: 10.1016/j.ins.2017.09.010
    [36]
    C. Lee, C. Lee, and C. S. Kim, “Contrast enhancement based on layered difference representation,” in 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, pp.965–968, 2012.
    [37]
    K. D. Ma, K. Zeng, and Z. Wang, “Perceptual quality assessment for multi-exposure image fusion,” IEEE Transactions on Image Processing, vol.24, no.11, pp.3345–3356, 2015. doi: 10.1109/TIP.2015.2442920
    [38]
    D. T. Dang-Nguyen, C. Pasquini, V. Conotter, et al., “RAISE: A raw images dataset for digital image forensics,” in Proceedings of the 6th ACM Multimedia Systems Conference, New York, NY, USA, pp.219–224, 2015.
    [39]
    J. R. Cai, S. H. Gu, and L. Zhang, “Learning a deep single image contrast enhancer from multi-exposure images,” IEEE Transactions on Image Processing, vol.27, no.4, pp.2049–2062, 2018. doi: 10.1109/TIP.2018.2794218
    [40]
    Z. Q. Ying, G. Li, and W. Gao, A bio-inspired multi-exposure fusion framework for low-light image enhancement, arXiv preprint arXiv: 1711.00591, 2017, doi: 10.48550/arXiv.1711.00591.
    [41]
    Y. Tai, J. Yang, X. M. Liu, et al., “MemNet: A persistent memory network for image restoration,” in 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp.4549–4557, 2017.
    [42]
    Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol.13, no.4, pp.600–612, 2004. doi: 10.1109/TIP.2003.819861
    [43]
    M. Abadi, P. Barham, J. M. Chen, et al., “TensorFlow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, Savannah, GA, USA, pp.265–283, 2016.
    [44]
    Z. J. Zhao, B. S. Xiong, L. Wang, et al., “RetinexDIP: A unified deep framework for low-light image enhancement,” IEEE Transactions on Circuits and Systems for Video Technology, vol.32, no.3, pp.1076–1088, 2022. doi: 10.1109/TCSVT.2021.3073371
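As a rough illustration of the two-stage idea in the abstract (decompose the image into reflectance and illumination, then optimize brightness), the sketch below uses a classical channel-max illumination estimate and gamma correction. It does not reproduce CvmD-net's learned ICFP or multi-branch components; the function name, the gamma value, and the illumination heuristic are all illustrative assumptions.

```python
import numpy as np

def retinex_enhance(img, gamma=0.45, eps=1e-6):
    """Two-stage Retinex sketch: decompose I = R * L, then brighten L.

    img: float array in [0, 1], shape (H, W, 3).
    Illumination is estimated per pixel as the channel-wise maximum
    (a classical heuristic, not the paper's learned decomposition).
    """
    # Stage 1: decomposition -- illumination L and reflectance R.
    # R carries structure and colour, so brightening only L preserves hue.
    L = img.max(axis=2, keepdims=True)      # (H, W, 1) illumination map
    R = img / (L + eps)                     # reflectance in [0, 1]
    # Stage 2: brightness optimization -- gamma-correct the illumination
    L_adj = np.power(L, gamma)
    return np.clip(R * L_adj, 0.0, 1.0)

# Example: a uniformly dark image brightens while channel ratios
# (and hence hue) are preserved.
dark = np.full((4, 4, 3), [0.02, 0.04, 0.08])
out = retinex_enhance(dark)
# each pixel rises from [0.02, 0.04, 0.08] to roughly [0.08, 0.16, 0.32]
```

Because only the illumination map is re-scaled, the reflectance, and with it the colour, is left untouched, which is the same motivation behind constraining the reflectance output in the paper's decomposition stage.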
    Figures (10) / Tables (1)

    Article Metrics

    Article views (259), PDF downloads (20)