Citation: YI Shi, LIU Xi, LI Li, CHENG Xinghao, WANG Cheng. Infrared and Visible Image Fusion Based on Blur Suppression Generative Adversarial Network[J]. Chinese Journal of Electronics, 2023, 32(1): 177-188. doi: 10.23919/cje.2021.00.084
[1] X. Jin, Q. Jiang, S. Yao, et al., “A survey of infrared and visual image fusion methods,” Infrared Physics & Technology, vol.85, pp.478–501, 2017. doi: 10.1016/j.infrared.2017.07.010
[2] J. Ma, Y. Ma, and C. Li, “Infrared and visible image fusion methods and applications: A survey,” Information Fusion, vol.45, pp.153–178, 2019. doi: 10.1016/j.inffus.2018.02.004
[3] J. Han, H. Chen, N. Liu, et al., “CNNs-based RGB-D saliency detection via cross-view transfer and multi-view fusion,” IEEE Transactions on Cybernetics, vol.48, no.11, pp.3171–3183, 2018. doi: 10.1109/TCYB.2017.2761775
[4] S. Li, B. Yang, and J. Hu, “Performance comparison of different multi-resolution transforms for image fusion,” Information Fusion, vol.12, no.2, pp.74–84, 2011. doi: 10.1016/j.inffus.2010.03.002
[5] J. Ma, C. Chen, C. Li, et al., “Infrared and visible image fusion via gradient transfer and total variation minimization,” Information Fusion, vol.31, pp.100–109, 2016. doi: 10.1016/j.inffus.2016.02.001
[6] Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” Proceedings of the IEEE, vol.87, no.8, pp.1315–1326, 1999. doi: 10.1109/5.775414
[7] S. Yu and X. Chen, “Infrared and visible image fusion based on a latent low-rank representation nested with multiscale geometric transform,” IEEE Access, vol.8, pp.110214–110226, 2020. doi: 10.1109/ACCESS.2020.3001974
[8] J. Wang, J. Peng, X. Feng, et al., “Fusion method for infrared and visible images by using non-negative sparse representation,” Infrared Physics & Technology, vol.67, pp.477–489, 2014. doi: 10.1016/j.infrared.2014.09.019
[9] Q. Zhang, Y. Fu, H. Li, et al., “Dictionary learning method for joint sparse representation-based image fusion,” Optical Engineering, vol.52, no.5, article no.057006, 2013. doi: 10.1117/1.OE.52.5.057006
[10] C. H. Liu, Y. Qi, and W. R. Ding, “Infrared and visible image fusion method based on saliency detection in sparse domain,” Infrared Physics & Technology, vol.83, pp.94–102, 2017. doi: 10.1016/j.infrared.2017.04.018
[11] W. Kong, Y. Lei, and H. Zhao, “Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization,” Infrared Physics & Technology, vol.67, pp.161–172, 2014. doi: 10.1016/j.infrared.2014.07.019
[12] R. Ibrahim, J. Alirezaie, P. Babyn, et al., “Pixel level jointed sparse representation with RPCA image fusion algorithm,” in Proceedings of the 38th International Conference on Telecommunications and Signal Processing, Prague, Czech Republic, pp.592–595, 2015.
[13] X. Zhang, Y. Ma, F. Fan, et al., “Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition,” Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol.34, no.8, pp.1400–1410, 2017. doi: 10.1364/JOSAA.34.001400
[14] J. Zhao, Y. Chen, H. Feng, et al., “Infrared image enhancement through saliency feature analysis based on multi-scale decomposition,” Infrared Physics & Technology, vol.62, pp.86–93, 2014. doi: 10.1016/j.infrared.2013.11.008
[15] C. Zhao, Y. Huang, and S. Qiu, “Infrared and visible image fusion algorithm based on saliency detection and adaptive double-channel spiking cortical model,” Infrared Physics & Technology, vol.102, article no.102976, 2019. doi: 10.1016/j.infrared.2019.102976
[16] V. Naidu, “Hybrid DDCT-PCA based multi sensor image fusion,” Journal of Optics, vol.43, no.1, pp.48–61, 2014. doi: 10.1007/s12596-013-0148-7
[17] Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Information Fusion, vol.24, pp.147–164, 2015. doi: 10.1016/j.inffus.2014.09.004
[18] J. Ma, Z. Zhou, B. Wang, et al., “Infrared and visible image fusion based on visual saliency map and weighted least square optimization,” Infrared Physics & Technology, vol.82, pp.8–17, 2017. doi: 10.1016/j.infrared.2017.02.005
[19] Y. Liu, X. Chen, J. Cheng, et al., “Infrared and visible image fusion with convolutional neural networks,” International Journal of Wavelets, Multiresolution and Information Processing, vol.16, no.2, article no.1850018, 2018. doi: 10.1142/S0219691318500182
[20] H. Li, X. Wu, and J. Kittler, “Infrared and visible image fusion using a deep learning framework,” in Proceedings of the 24th International Conference on Pattern Recognition (ICPR), Beijing, China, pp.2705–2710, 2018.
[21] H. Li and X. J. Wu, “DenseFuse: A fusion approach to infrared and visible images,” IEEE Transactions on Image Processing, vol.28, no.5, pp.2614–2623, 2019. doi: 10.1109/TIP.2018.2887342
[22] H. Li, X. J. Wu, and T. S. Durrani, “Infrared and visible image fusion with ResNet and zero-phase component analysis,” Infrared Physics & Technology, vol.102, article no.103039, 2019. doi: 10.1016/j.infrared.2019.103039
[23] W. B. An and H. M. Wang, “Infrared and visible image fusion with supervised convolutional neural network,” Optik, vol.219, article no.165120, 2020. doi: 10.1016/j.ijleo.2020.165120
[24] J. Li, H. Huo, C. Li, et al., “AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial network,” IEEE Transactions on Multimedia, vol.23, pp.1383–1396, 2020. doi: 10.1109/TMM.2020.2997127
[25] J. Li, H. Huo, K. Liu, et al., “Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance,” Information Sciences, vol.529, pp.28–41, 2020. doi: 10.1016/j.ins.2020.04.035
[26] J. Ma, W. Yu, P. Liang, et al., “FusionGAN: A generative adversarial network for infrared and visible image fusion,” Information Fusion, vol.48, pp.11–26, 2019. doi: 10.1016/j.inffus.2018.09.004
[27] S. Yi, J. J. Li, and X. S. Yuan, “DFPGAN: Dual fusion path generative adversarial network for infrared and visible image fusion,” Infrared Physics & Technology, vol.119, article no.103947, 2021. doi: 10.1016/j.infrared.2021.103947
[28] H. Cai, L. Zou, P. Zhu, et al., “Fusion of infrared and visible images based on non-subsampled contourlet transform and intuitionistic fuzzy set,” Acta Photonica Sinica, vol.47, no.6, pp.225–234, 2018. (in Chinese)
[29] I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27 (NIPS 2014), Curran Associates, Inc., Red Hook, NY, USA, available at: https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf, 2014.
[30] X. S. Wang, Y. Li, and Y. H. Cheng, “Hyperspectral image classification based on unsupervised heterogeneous domain adaptation CycleGAN,” Chinese Journal of Electronics, vol.29, no.4, pp.608–614, 2020. doi: 10.1049/cje.2020.05.003
[31] G. Jin, Y. Zhang, and K. Lu, “Deep hashing based on VAE-GAN for efficient similarity retrieval,” Chinese Journal of Electronics, vol.28, no.6, pp.1191–1197, 2019.
[32] X. Guo, R. Nie, J. Cao, et al., “FuseGAN: Learning to fuse multi-focus image via conditional generative adversarial network,” IEEE Transactions on Multimedia, vol.21, no.8, pp.1982–1996, 2019. doi: 10.1109/TMM.2019.2895292
[33] J. Huang, Z. Le, Y. Ma, et al., “A generative adversarial network with adaptive constraints for multi-focus image fusion,” Neural Computing and Applications, vol.32, no.18, pp.15119–15129, 2020. doi: 10.1007/s00521-020-04863-1
[34] J. Ma, W. Yu, C. Chen, et al., “Pan-GAN: An unsupervised learning method for pan-sharpening in remote sensing image fusion using a generative adversarial network,” Information Fusion, vol.62, pp.110–120, 2020. doi: 10.1016/j.inffus.2020.04.006
[35] J. Ma, P. Liang, W. Yu, et al., “Infrared and visible image fusion via detail preserving adversarial learning,” Information Fusion, vol.54, pp.85–98, 2020. doi: 10.1016/j.inffus.2019.07.005
[36] J. Ma, H. Xu, J. Jiang, et al., “DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion,” IEEE Transactions on Image Processing, vol.29, pp.4980–4995, 2020. doi: 10.1109/TIP.2020.2977573
[37] D. Berthelot, T. Schumm, and L. Metz, “BEGAN: Boundary equilibrium generative adversarial networks,” arXiv preprint, arXiv:1703.10717, 2017.
[38] J. Xu, X. Shi, S. Qin, et al., “LBP-BEGAN: A generative adversarial network architecture for infrared and visible image fusion,” Infrared Physics & Technology, vol.104, article no.103144, 2020. doi: 10.1016/j.infrared.2019.103144
[39] T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recognition, vol.29, no.1, pp.51–59, 1996. doi: 10.1016/0031-3203(95)00067-4
[40] X. Wang, K. Yu, S. Wu, et al., “ESRGAN: Enhanced super-resolution generative adversarial networks,” in Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Springer, Cham, Switzerland, pp.63–79, 2018.
[41] G. Huang, Z. Liu, L. Van Der Maaten, et al., “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, pp.2261–2269, 2017.
[42] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, pp.448–456, 2015.
[43] P. Luo, J. Ren, Z. Peng, et al., “Differentiable learning-to-normalize via switchable normalization,” arXiv preprint, arXiv:1806.10779, 2019.
[44] J. Chen, S. Shan, C. He, et al., “WLD: A robust local image descriptor,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.32, no.9, pp.1705–1720, 2010. doi: 10.1109/TPAMI.2009.155
[45] Y. J. Rao, “In-fibre Bragg grating sensors,” Measurement Science and Technology, vol.8, no.4, article no.355, 1997. doi: 10.1088/0957-0233/8/4/002
[46] G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electronics Letters, vol.38, no.7, pp.313–315, 2002. doi: 10.1049/el:20020212
[47] Y. Han, Y. Cai, Y. Cao, et al., “A new image fusion performance metric based on visual information fidelity,” Information Fusion, vol.14, no.2, pp.127–135, 2013. doi: 10.1016/j.inffus.2011.08.002