Volume 30 Issue 2
Apr. 2021
Citation: RONG Chuanzhen, LIU Gaohang, PING Zhuolin, et al., "Fusion of Infrared and Visible Images Based on Infrared Object Extraction," Chinese Journal of Electronics, vol. 30, no. 2, pp. 339-348, 2021, doi: 10.1049/cje.2020.11.013

Fusion of Infrared and Visible Images Based on Infrared Object Extraction

doi: 10.1049/cje.2020.11.013
Funds: The Basic Frontier Innovation Project of Army Engineering University (KYTYJQZL1908)

More Information
  • Author Bios:

    RONG Chuanzhen received the M.S. degree from Shandong University, China, in 2010. He is now a lecturer at Army Engineering University of PLA. His research focuses on information fusion. (Email: rcz@foxmail.com)

    LIU Gaohang received the B.S. degree from Army Engineering University of PLA, China, in 2017. He is now an officer in the PAP. His research focuses on information fusion. (Email: 1641688076@qq.com)

  • Corresponding author: XU Guanghui is an associate professor at Army Engineering University of PLA. He is principally engaged in unmanned system applications. (Email: 285556453@qq.com)
  • Received Date: 2019-12-19
  • Accepted Date: 2020-06-22
  • Publish Date: 2021-03-01
  • Abstract: The ideal fused result of infrared and visible images should contain the important infrared objects and preserve as much of the visible textural detail as possible, so that the fused image is consistent with human visual perception. For this purpose, a novel infrared and visible image fusion framework is proposed. Under the guidance of this model, the source images are decomposed into large-scale edge, small-scale textural detail, and coarse-scale base information. The large-scale edge information contains the main infrared features; on this basis, the infrared image is further segmented into object, transition, and background regions by the OTSU multi-threshold segmentation algorithm. Finally, the fusion weights for the decomposed sub-information are determined by the segmentation results, so that the infrared object information is effectively injected into the fused image while the important visible textural detail is preserved as much as possible. Experimental results show that the proposed method not only highlights the infrared objects but also preserves the visual information of the visible image. The fused results are superior to commonly used representative fusion methods in both subjective perception and objective evaluation.
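    As a rough illustration of the pipeline the abstract describes, the sketch below decomposes each source image into coarse-scale base, large-scale edge, and small-scale detail layers with a pair of Gaussian filters (standing in for the paper's decomposition model), segments the infrared image into background, transition, and object regions with scikit-image's threshold_multiotsu, and fuses the layers with region-driven weights. The sigma values, the 0.9/0.5/0.1 weights, the boundary feathering, and the max-absolute detail rule are illustrative assumptions, not the authors' published settings.

    ```python
    # A minimal sketch of the fusion pipeline; parameters are illustrative,
    # not the authors' exact settings.
    import cv2
    import numpy as np
    from skimage.filters import threshold_multiotsu

    def decompose(img, sigma_small=2, sigma_large=8):
        """Split an image into coarse base, large-scale edge, and small-scale detail."""
        img = img.astype(np.float32) / 255.0
        coarse = cv2.GaussianBlur(img, (0, 0), sigma_large)  # coarse-scale base
        medium = cv2.GaussianBlur(img, (0, 0), sigma_small)
        detail = img - medium        # small-scale textural detail
        edge = medium - coarse       # large-scale edge (base + edge + detail == img)
        return coarse, edge, detail

    def segment_ir(ir):
        """OTSU multi-threshold segmentation into background/transition/object."""
        t1, t2 = threshold_multiotsu(ir, classes=3)
        return np.digitize(ir, bins=(t1, t2))  # 0=background, 1=transition, 2=object

    def fuse(ir, vis):
        ir_base, ir_edge, ir_detail = decompose(ir)
        vis_base, vis_edge, vis_detail = decompose(vis)
        regions = segment_ir(ir)

        # Region-driven weights (assumed values): favor infrared information in
        # object regions, visible information in the background.
        w = np.select([regions == 2, regions == 1], [0.9, 0.5],
                      default=0.1).astype(np.float32)
        w = cv2.GaussianBlur(w, (0, 0), 4)  # feather region boundaries

        base = w * ir_base + (1 - w) * vis_base
        edge = w * ir_edge + (1 - w) * vis_edge
        # Max-absolute rule keeps the stronger textural detail at each pixel.
        detail = np.where(np.abs(ir_detail) > np.abs(vis_detail),
                          ir_detail, vis_detail)
        return (np.clip(base + edge + detail, 0, 1) * 255).astype(np.uint8)

    if __name__ == "__main__":
        ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
        vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
        cv2.imwrite("fused.png", fuse(ir, vis))
    ```

    Because the weight map is derived from the segmented infrared objects rather than from a global rule, bright infrared targets survive the fusion even where the visible image is dark, which is the behavior the abstract claims for the proposed framework.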
  • [1]
    Jin X, Jiang Q, Yao S, et al., "A survey of infrared and visual image fusion methods", Infrared Phys. Technol. , Vol. 85, pp. 478-501, 2017. doi: 10.1016/j.infrared.2017.07.010
    [2]
    S. Li, B. Yang and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion", Inform. Fusion, Vol. 12, No. 2, pp. 74-84, 2011. doi: 10.1016/j.inffus.2010.03.002
    [3]
    J. Hu and S. Li, "The multiscale directional bilateral filter and its application to multisensory image fusion", Inf. Fusion, Vol. 13, No. 3, pp. 196-206, 2012. doi: 10.1016/j.inffus.2011.01.002
    [4]
    Z. Zhou, B. Wang, S. Li, et al., "Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters", Inf. Fusion, Vol. 30, pp. 15-26, 2016. doi: 10.1016/j.inffus.2015.11.003
    [5]
    J. Ma, Z. Zhou, B. Wang, et al., "Infrared and visible image fusion based on visual saliency map and weighted least square optimization", Infrared Phys. Technol. , Vol. 82, pp. 8-17, 2017. doi: 10.1016/j.infrared.2017.02.005
    [6]
    Y. Liu, S. Liu and Z Wang, "A general framework for image fusion based on multi-scale transform and sparse representation", Inf. Fusion, Vol. 24, pp. 147-164, 2015. doi: 10.1016/j.inffus.2014.09.004
    [7]
    B. Yang and S. Li, "Multifocus image fusion and restoration with sparse representation", IEEE Transactions on Instrumentation and Measurement, Vol. 59, No. 4, pp. 884-892, 2010. doi: 10.1109/TIM.2009.2026612
    [8]
    H. P. Yin, Z. D. Liu, B. Fang, et al., "A novel image fusion approach based on compressive sensing", Optics Communications, Vol. 354, pp. 299-313, 2015. doi: 10.1016/j.optcom.2015.05.020
    [9]
    Minjae Kim, David K. Han and Hanseok Ko, "Joint patch clustering-based dictionary learning for multimodal image fusion", Information Fusion, Vol. 27, pp. 198-214, 2016. doi: 10.1016/j.inffus.2015.03.003
    [10]
    Z. D. Liu, H. P. Yin, B. Fang, et al., "A novel fusion scheme for visible and infrared image based on compressive sensing", Optics Communications, Vol. 355, pp. 168-177, 2015.
    [11]
    G. Cui, H. Feng, Z. Xu, et al., "Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition", Optics Communications, Vol. 341, pp. 199-209, 2015. doi: 10.1016/j.optcom.2014.12.032
    [12]
    H. Li, X. Wu and J. Kittler, "Infrared and visible image fusion using a deep learning framework", Int. Conference on Pattern Recognition, Beijing, China, DOI: 10.1109/ICPR.2018.8546006,2018.
    [13]
    Y. Liu, X. Chen, J. Chen, et al., "Infrared and visible image fusion with convolutional neural networks", International Journal of Wavelets, Multiresolution and Information Processing, Vol. 16, No. 3, pp. 1-20, 2018.
    [14]
    Y. Zhang, Y. Liu, P. Sun, et al., "IFCNN: A general image fusion framework based on convolutional neural network", Information Fusion, Vol. 54, pp. 99-118, 2020. doi: 10.1016/j.inffus.2019.07.011
    [15]
    S. Li, X. Kang and J. Hu, "Image fusion with guided filtering", IEEE Trans. Im. Proc. , Vol. 22, No. 7, pp. 2864-2875, 2013.
    [16]
    Y. Zhang, L. Zhang and X. Bai, "Infrared and visual image fusion through infrared feature extraction and visual information preservation", Infrared Phys. Technol. , Vol. 83, pp. 227-237, 2017. doi: 10.1016/j.infrared.2017.05.007
    [17]
    N. Cvejic, C. N. Canagarajah and D. R. Bull, "Image fusion metric based on mutual information and Tsallis entropy", Electronics Letters, Vol. 42, No. 11, pp. 626-627, 2006. doi: 10.1049/el:20060693
    [18]
    C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure", Military Technical Courier, Vol. 56, No. 4, pp. 181-193, 2000.
    [19]
    G. Piella and H. Heijmans, "A new quality metric for image fusion", International Conference on Image Processing, IEEE, DOI: 10.1109/ICIP.2003.1247209,2003.
    [20]
    S. Li, R. Hong and X. Wu, "A novel similarity based quality metric for image fusion", International Conference on Audio, Language and Image Processing, pp. 167-172, 2008.
    [21]
    Y. Chen and R. S. Blum, "A new automated quality assessment algorithm for image fusion", Image and Vision Computing, Vol. 27, No. 12, pp. 1421-1432, 2009.
    [22]
    M. Haghighat and M A. Razian, "Fast-FMI: Non-reference image fusion metric", 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT), IEEE, pp. 1-3, 2014.
    [23]
    B K S. Kumar, "Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform", Signal, Image and Video Processing, Vol. 7, No. 6, pp. 1125-1143, 2013. doi: 10.1007/s11760-012-0361-x
