Abstract

Since the infrared (IR) image and the visible (VI) image capture different aspects of the same scene, it is inappropriate to fuse them using the same representation and similar features. Gradient transfer fusion (GTF), based on the ${\ell_1}$ norm, addresses this issue well. This paper demonstrates that the ${\ell_2}$ norm can address it equally well, based on our proposed fusion model. We formulate the fusion task as an ${\ell_2}$-norm optimization problem, where the first term, measured by the ${\ell_2}$ norm, constrains the fused image to have pixel intensities similar to those of the IR image, and the second term, also computed by the ${\ell_2}$ norm, forces the fused image to have a gradient distribution similar to that of the VI image. Because directly optimizing the ${\ell_2}$ norm yields an over-smooth fused image, we introduce two weights into the objective function, inspired by the weighted least squares filtering (WLSF) framework. Unlike ${\ell_1}$-norm-based methods such as GTF, our method yields a closed-form expression relating the source images to the fusion result, since the ${\ell_2}$ norm is differentiable; this makes the method both effective and efficient. The closed-form solution not only distinguishes our method from current fusion methods but also gives it a lower computational cost than most of them. Experimental results demonstrate that our method outperforms GTF and most state-of-the-art fusion methods in terms of both visual quality and evaluation metrics; our fused images resemble IR images enriched with abundant VI appearance information.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


1. J. Ma, Y. Ma, and C. Li, "Infrared and visible image fusion methods and applications: A survey," Inf. Fusion 45, 153–178 (2019).
2. S. Li, X. Kang, and J. Hu, "Image fusion with guided filtering," IEEE Trans. Image Process. 22(7), 2864–2875 (2013).
3. X. Yan, H. Qin, J. Li, H. Zhou, J. Zong, and Q. Zeng, "Infrared and visible image fusion using multiscale directional nonlocal means filter," Appl. Opt. 54(13), 4299–4308 (2015).
4. S. Zhenfeng, L. Jun, and C. Qimin, "Fusion of infrared and visible images based on focus measure operators in the curvelet domain," Appl. Opt. 51(12), 1910–1921 (2012).
5. Z. Zhou, M. Dong, X. Xie, and Z. Gao, "Fusion of infrared and visible images for night-vision context enhancement," Appl. Opt. 55(23), 6480–6490 (2016).
6. X. Yan, H. Qin, J. Li, H. Zhou, and J. G. Zong, "Infrared and visible image fusion with spectral graph wavelet transform," J. Opt. Soc. Am. A 32(9), 1643–1652 (2015).
7. X. Zhang, Y. Ma, F. Fan, Y. Zhang, and J. Huang, "Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition," J. Opt. Soc. Am. A 34(8), 1400–1410 (2017).
8. Z. Fu, X. Wang, J. Xu, N. Zhou, and Y. Zhao, "Infrared and visible images fusion based on RPCA and NSCT," Infrared Phys. Technol. 77, 114–123 (2016).
9. Z. Zhou, B. Wang, S. Li, and M. Dong, "Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters," Inf. Fusion 30, 15–26 (2016).
10. J. Ma, Z. Zhou, B. Wang, and H. Zong, "Infrared and visible image fusion based on visual saliency map and weighted least square optimization," Infrared Phys. Technol. 82, 8–17 (2017).
11. X. Luo, Z. Zhang, B. Zhang, and X. Wu, "Image fusion with contextual statistical similarity and nonsubsampled shearlet transform," IEEE Sens. J. 17(6), 1760–1771 (2017).
12. D. P. Bavirisetti and R. Dhuli, "Two-scale image fusion of visible and infrared images using saliency detection," Infrared Phys. Technol. 76, 52–64 (2016).
13. B. Cheng, L. Jin, and G. Li, "General fusion method for infrared and visual images via latent low-rank representation and local non-subsampled shearlet transform," Infrared Phys. Technol. 92, 68–77 (2018).
14. V. P. S. Naidu, "Image fusion technique using multi-resolution singular value decomposition," Def. Sci. J. 61(5), 479–484 (2011).
15. W. Tan, H. Zhou, J. Song, H. Li, Y. Yu, and J. Du, "Infrared and visible image perceptive fusion through multi-level Gaussian curvature filtering image decomposition," Appl. Opt. 58(12), 3064–3073 (2019).
16. J. Zhu, W. Jin, L. Li, Z. Han, and X. Wang, "Fusion of the low-light-level visible and infrared images for night-vision context enhancement," Chin. Opt. Lett. 16(1), 013501 (2018).
17. J. Chen, X. Li, L. Luo, X. Mei, and J. Ma, "Infrared and visible image fusion based on target-enhanced multiscale transform decomposition," Inf. Sci. 508, 64–78 (2020).
18. Y. Liu, X. Chen, J. Cheng, H. Peng, and Z. Wang, "Infrared and visible image fusion with convolutional neural networks," Int. J. Wavelets, Multiresolut. Inf. Process. 16(3), 1850018 (2018).
19. J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, "FusionGAN: A generative adversarial network for infrared and visible image fusion," Inf. Fusion 48, 11–26 (2019).
20. Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, "IFCNN: A general image fusion framework based on convolutional neural network," Inf. Fusion 54, 99–118 (2020).
21. J. Ma, P. Liang, W. Yu, C. Chen, X. Guo, J. Wu, and J. Jiang, "Infrared and visible image fusion via detail preserving adversarial learning," Inf. Fusion 54, 85–98 (2020).
22. X. Ren, F. Meng, T. Hu, Z. Liu, and C. Wang, "Infrared-visible image fusion based on convolutional neural networks (CNN)," in Proceedings of the International Conference on Intelligent Science and Big Data Engineering (Springer, 2018), pp. 301–307.
23. H. Li and X.-J. Wu, "DenseFuse: A fusion approach to infrared and visible images," IEEE Trans. Image Process. 28(5), 2614–2623 (2019).
24. H. Li, X.-J. Wu, and T. S. Durrani, "Infrared and visible image fusion with ResNet and zero-phase component analysis," arXiv preprint arXiv:1806.07119 (2018).
25. H. Li, X.-J. Wu, and J. Kittler, "Infrared and visible image fusion using a deep learning framework," in 2018 24th International Conference on Pattern Recognition (ICPR) (IEEE, 2018).
26. H. Xu, P. Liang, W. Yu, J. Jiang, and J. Ma, "Learning a generative model for fusing infrared and visible images via conditional generative adversarial network with dual discriminators," in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) (2019), pp. 3954–3960.
27. J. Ma, C. Chen, C. Li, and J. Huang, "Infrared and visible image fusion via gradient transfer and total variation minimization," Inf. Fusion 31, 100–109 (2016).
28. Y. Zhang, L. Zhang, X. Bai, and L. Zhang, "Infrared and visual image fusion through infrared feature extraction and visual information preservation," Infrared Phys. Technol. 83, 227–237 (2017).
29. H. Guo, Y. Ma, X. Mei, and J. Ma, "Infrared and visible image fusion based on total variation and augmented Lagrangian," J. Opt. Soc. Am. A 34(11), 1961–1968 (2017).
30. B. Cheng, L. Jin, and G. Li, "Infrared and low-light-level image fusion based on ℓ2-energy minimization and mixed-ℓ1-gradient regularization," Infrared Phys. Technol. 96, 163–173 (2019).
31. C. H. Liu, Y. Qi, and W. R. Ding, "Infrared and visible image fusion method based on saliency detection in sparse domain," Infrared Phys. Technol. 83, 94–102 (2017).
32. H. Li and X.-J. Wu, "Infrared and visible image fusion using latent low-rank representation," arXiv preprint arXiv:1804.08992 (2018).
33. H. Li and X.-J. Wu, "Infrared and visible image fusion using a novel deep decomposition method," arXiv preprint arXiv:1811.02291 (2018).
34. G. Piella, "A general framework for multiresolution image fusion: from pixels to regions," Inf. Fusion 4(4), 259–280 (2003).
35. D. P. Bavirisetti and R. Dhuli, "Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform," IEEE Sens. J. 16(1), 203–209 (2016).
36. Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Trans. Graph. 27(3), 1 (2008).
37. J. Ma, W. Qiu, Z. Ji, M. Yong, and Z. Tu, "Robust L2E estimation of transformation for non-rigid registration," IEEE Trans. Signal Process. 63(5), 1115–1129 (2015).
38. J. Ma, J. Jiang, C. Liu, and Y. Li, "Feature guided Gaussian mixture model with semi-supervised EM and local geometric constraint for retinal image registration," Inf. Sci. 417, 128–142 (2017).
39. J. Ma, H. Zhou, Z. Ji, G. Yuan, J. Jiang, and J. Tian, "Robust feature matching for remote sensing image registration via locally linear transforming," IEEE Trans. Geosci. Remote Sens. 53(12), 6469–6481 (2015).
40. J. Ma, J. Zhao, Y. Ma, and J. Tian, "Non-rigid visible and infrared face registration via regularized Gaussian fields criterion," Pattern Recognit. 48(3), 772–784 (2015).
41. A. Toet, "TNO Image Fusion Dataset" (April 2015), http://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029.
42. H. Li, B. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graph. Models Image Process. 57(3), 235–245 (1995).
43. D. P. Bavirisetti, G. Xiao, and G. Liu, "Multi-sensor image fusion based on fourth order partial differential equations," in Proceedings of the International Conference on Information Fusion (ICIF) (IEEE, 2017), pp. 1–9.
44. Y. Liu, X. Chen, R. K. Ward, and Z. J. Wang, "Image fusion with convolutional sparse representation," IEEE Signal Process. Lett. 23(12), 1882–1886 (2016).
45. M. Hossny, S. Nahavandi, D. Creighton, and A. Bhatti, "Image fusion performance metric based on mutual information and entropy driven quadtree decomposition," Electron. Lett. 46(18), 1266–1268 (2010).
46. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600–612 (2004).
47. Y. Han, Y. Cai, Y. Cao, and X. Xu, "A new image fusion performance metric based on visual information fidelity," Inf. Fusion 14(2), 127–135 (2013).



Figures (9)

Fig. 1. (a) and (b) are the IR image and VI image, respectively. (c)–(f) are the fusion results obtained by ADF [35], GTF [27], FusionGAN [19], and our method.
Fig. 2. The source images used in our experiments.
Fig. 3. (a) and (b) are the IR image and VI image, respectively. (c)–(e) are the fusion results obtained by our method without weights; (f)–(h) are the fusion results obtained by our method with weights.
Fig. 4. (a) and (b) are the IR image and VI image, respectively. (c)–(k) are the fusion results obtained by different fusion methods on the "Bunker" image pair.
Fig. 5. (a) and (b) are the IR image and VI image, respectively. (c)–(k) are the fusion results obtained by different fusion methods on the "Lake" image pair.
Fig. 6. (a) and (b) are the IR image and VI image, respectively. (c)–(k) are the fusion results obtained by different fusion methods on the "Kaptein_1654" image pair.
Fig. 7. (a) and (b) are the IR image and VI image, respectively. (c)–(k) are the fusion results obtained by different fusion methods on the "Kaptein_1123" image pair.
Fig. 8. Fusion results obtained by our method on all the image pairs.
Fig. 9. Quantitative results of the six metrics on all the image pairs; the values in each legend are the average metric value of each fusion method.

Tables (1)

Table 1. The average running time of each method on all the image pairs (unit: seconds).

Equations (6)

$$\left\| x - u \right\|_2^2,$$

$$\left\| \sqrt{\left( \nabla_x (x - v) \right)^2 + \left( \nabla_y (x - v) \right)^2} \right\|_2^2,$$

$$\arg\min_x \left( \left\| x - u \right\|_2^2 + \lambda \left\| \sqrt{\left( \nabla_x (x - v) \right)^2 + \left( \nabla_y (x - v) \right)^2} \right\|_2^2 \right),$$

$$\arg\min_x \left( \left\| x - u \right\|_2^2 + \lambda \left\| \sqrt{a_x(u) \left( \nabla_x (x - v) \right)^2 + a_y(u) \left( \nabla_y (x - v) \right)^2} \right\|_2^2 \right),$$

$$a_x(u) = \left( |\ell_x|^{\alpha} + \varepsilon \right)^{-1}, \qquad a_y(u) = \left( |\ell_y|^{\alpha} + \varepsilon \right)^{-1},$$

$$x = (I + \lambda L)^{-1} (u + \lambda L v).$$
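
The last equation is the closed-form solution of the weighted objective: since every term is a differentiable ${\ell_2}$ quantity, setting the gradient to zero yields the linear system $(I + \lambda L)x = u + \lambda L v$, where $L$ is presumably the spatially weighted Laplacian assembled, as in the WLSF framework [36], as $L = D_x^{\top} A_x D_x + D_y^{\top} A_y D_y$ from discrete difference operators $D_x, D_y$ and diagonal matrices $A_x, A_y$ holding the weights $a_x(u), a_y(u)$. Below is a minimal Python sketch of this solve; it is one reading of the equations above, not the authors' reference implementation, and it assumes forward differences, weights computed from the log-luminance of the IR image $u$ (as in WLSF), and illustrative parameter values for $\lambda$, $\alpha$, and $\varepsilon$.

```python
# Minimal sketch of the closed-form fusion x = (I + lam*L)^(-1) (u + lam*L*v).
# Assumptions (not from the paper): forward-difference operators, weights from
# the log-luminance of the IR image u as in WLSF, illustrative parameter values.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def forward_diff(m):
    """(m-1) x m forward-difference matrix: (Df)[k] = f[k+1] - f[k]."""
    return sp.diags([-np.ones(m - 1), np.ones(m - 1)], [0, 1],
                    shape=(m - 1, m), format="csr")

def fuse_l2(u, v, lam=0.8, alpha=1.2, eps=1e-4):
    """Fuse IR image u and VI image v (2-D arrays of equal shape)."""
    u = np.asarray(u, dtype=np.float64)
    v = np.asarray(v, dtype=np.float64)
    h, w = u.shape
    n = h * w
    # WLS-style weights a_x(u), a_y(u) = (|l_x|^alpha + eps)^(-1),
    # with l taken as the log-luminance of u (an assumption borrowed from WLSF).
    l = np.log(np.clip(u, 1e-4, None))
    ax = 1.0 / (np.abs(np.diff(l, axis=1)) ** alpha + eps)   # shape (h, w-1)
    ay = 1.0 / (np.abs(np.diff(l, axis=0)) ** alpha + eps)   # shape (h-1, w)
    # Gradient operators acting on the row-major vectorized image.
    Dx = sp.kron(sp.identity(h), forward_diff(w), format="csr")  # horizontal
    Dy = sp.kron(forward_diff(h), sp.identity(w), format="csr")  # vertical
    # Spatially weighted Laplacian L = Dx' Ax Dx + Dy' Ay Dy.
    L = Dx.T @ sp.diags(ax.ravel()) @ Dx + Dy.T @ sp.diags(ay.ravel()) @ Dy
    # Closed-form solution: (I + lam*L) x = u + lam*L v.
    A = (sp.identity(n) + lam * L).tocsc()
    b = u.ravel() + lam * (L @ v.ravel())
    return spla.spsolve(A, b).reshape(h, w)
```

Because $(I + \lambda L)$ is sparse, symmetric, and positive definite, each fusion amounts to a single sparse linear solve (a conjugate-gradient solver could replace `spsolve` for large images), which is consistent with the low computational cost claimed in the abstract. For 8-bit source images, clipping the result back to [0, 255] gives the displayable fused image.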
