Original Articles

Infrared and visible images fusion by using sparse representation and guided filter

Pages 254-263 | Received 05 Dec 2018, Accepted 11 Jul 2019, Published online: 01 Aug 2019

