Research Article

R2F-UGCGAN: a regional fusion factor-based union gradient and contrast generative adversarial network for infrared and visible image fusion

Pages 52-68 | Received 05 Jul 2022, Accepted 04 Jan 2023, Published online: 09 Feb 2023

