Abstract

Most imaging devices lose scene information during acquisition because of their low dynamic range (LDR). Existing high dynamic range (HDR) imaging techniques trade temporal or spatial resolution for dynamic range, which can introduce motion blur or image misalignment. Current HDR methods fuse multiple LDR frames and can therefore suffer from blurred fine details, aliasing, and boundary artifacts. In this study, we developed a dual-channel camera (DCC) for HDR imaging that eliminates motion blur and registration problems. Considering the output characteristics of the camera, we propose a weighted sparse representation multi-scale transform (wSR-MST) fusion algorithm that fully preserves the original image information while eliminating aliasing and boundary artifacts in the fused image, yielding high-quality HDR images.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References

  1. Y. J. Jung, “Enhancement of low light level images using color-plus-mono dual camera,” Opt. Express 25(10), 12029–12051 (2017).
  2. X. Guo, Y. Li, and H. Ling, “LIME: Low-light image enhancement via illumination map estimation,” IEEE Trans. Image Process. 26(2), 982–993 (2017).
  3. M. Mori, Y. Hirose, M. Segawa, I. Miyanaga, R. Miyagawa, T. Ueda, H. Nara, H. Masuda, S. Kishimura, and T. Sasaki, “Thin organic photoconductive film image sensors with extremely high saturation of 8500 electrons/µm²,” in IEEE Symposium on VLSI Technology, T22–T23 (2013).
  4. S. K. Nayar and T. Mitsunaga, “High dynamic range imaging: Spatially varying pixel exposures,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 472–479 (2000).
  5. OmniVision Technologies, “OV10630/OV10635 HDR product brief” (OmniVision Technologies, 2013). https://www.ovt.com/sensors/OV10635
  6. ON Semiconductor, “MT9M034 1/3-inch CMOS digital image sensor” (ON Semiconductor, 2017). https://www.onsemi.com/pub/Collateral/MT9M034-D.PDF
  7. W. Wang and F. Chang, “A multi-focus image fusion method based on Laplacian pyramid,” J. Computers 6(12), 2559–2566 (2011).
  8. A. Toet, “Image fusion by a ratio of low-pass pyramid,” Pattern Recognit. Lett. 9(4), 245–253 (1989).
  9. V. S. Petrović and C. S. Xydeas, “Gradient-based multiresolution image fusion,” IEEE Trans. Image Process. 13(2), 228–237 (2004).
  10. H. Li, B. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graph. Models Image Proc. 57(3), 235–245 (1995).
  11. P. Borwonwatanadelok, W. Rattanapitak, and S. Udomhunsakul, “Multi-focus image fusion based on stationary wavelet transform and extended spatial frequency measurement,” in Proceedings of IEEE Conference on Electronic Computer Technology, 77–81 (2009).
  12. S. Ioannidou and V. Karathanassi, “Investigation of the dual-tree complex and shift-invariant discrete wavelet transforms on Quickbird image fusion,” IEEE Geosci. Remote Sens. Lett. 4(1), 166–170 (2007).
  13. P. R. Hill, C. N. Canagarajah, and D. R. Bull, “Image fusion using complex wavelets,” in Proceedings of the British Machine Vision Conference (BMVC), 1–10 (2002).
  14. L. Guo, M. Dai, and M. Zhu, “Multifocus color image fusion based on quaternion curvelet transform,” Opt. Express 20(17), 18846–18860 (2012).
  15. M. Choi, R. Y. Kim, M.-R. Nam, and H. O. Kim, “Fusion of multispectral and panchromatic satellite images using the curvelet transform,” IEEE Geosci. Remote Sens. Lett. 2(2), 136–140 (2005).
  16. Q. Zhang and B. Guo, “Multifocus image fusion using the nonsubsampled contourlet transform,” Signal Processing 89(7), 1334–1346 (2009).
  17. M. Nejati, S. Samavi, and S. Shirani, “Multi-focus image fusion using dictionary-based sparse representation,” Inf. Fusion 25, 72–84 (2015).
  18. B. Yang and S. Li, “Multifocus image fusion and restoration with sparse representation,” IEEE Trans. Instrum. Meas. 59(4), 884–892 (2010).
  19. L. Chen, J. Li, and C. L. Chen, “Regional multifocus image fusion using sparse representation,” Opt. Express 21(4), 5182–5197 (2013).
  20. S. Li, H. Yin, and L. Fang, “Group-sparse representation with dictionary learning for medical image denoising and fusion,” IEEE Trans. Biomed. Eng. 59(12), 3450–3459 (2012).
  21. Q. Zhang, Y. Liu, R. S. Blum, J. Han, and D. Tao, “Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: a review,” Inf. Fusion 40, 57–75 (2018).
  22. Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Inf. Fusion 24, 147–164 (2015).
  23. Gpixel, “Backside illuminated scientific CMOS image sensors” (Gpixel, 2014). http://en.gpixelinc.com/productMechanies/19.html
  24. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process. 54(11), 4311–4322 (2006).
  25. M. Hossny, S. Nahavandi, and D. Creighton, “Comments on ‘Information measure for performance of image fusion’,” Electron. Lett. 44(18), 1066–1067 (2008).
  26. M. B. A. Haghighat, A. Aghagolzadeh, and H. Seyedarabi, “A non-reference image fusion metric based on mutual information of image features,” Comput. Electr. Eng. 37(5), 744–756 (2011).
  27. Y. Han, Y. Cai, Y. Cao, and X. Xu, “A new image fusion performance metric based on visual information fidelity,” Inf. Fusion 14(2), 127–135 (2013).
  28. H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, iii-709 (2004).
  29. G. Piella and H. Heijmans, “A new quality metric for image fusion,” in Proceedings of IEEE International Conference on Image Processing, III-173 (2003).


Figures (13)

Fig. 1 Photographs of the (a) GSENSE400 sensor and (b) low-light DCC.
Fig. 2 Schematic diagram of the multi-scale image fusion algorithm.
Fig. 3 Schematic diagram of the SR image fusion algorithm.
Fig. 4 Example image showing the drawback of the absolute-maximum fusion rule.
Fig. 5 An example of a dictionary learned using the K-SVD method [24].
Fig. 6 Schematic representation of the response curve of the DCC.
Fig. 7 Schematic diagram of the wSR algorithm.
Fig. 8 Flow diagram of the image fusion process of the DCC.
Fig. 9 Two example DCC images split into two channels (LG, HG).
Fig. 10 Fusion results of the first group of images using the MST, one-level SR/wSR, SR-MST, and wSR-MST algorithms.
Fig. 11 Fusion results of the second group of images using the MST, one-level SR/wSR, SR-MST, and wSR-MST algorithms.
Fig. 12 Magnified regions of the first group of fused images compared with the original two channels from the camera.
Fig. 13 Magnified regions of the second group of fused images compared with the original two channels from the camera.

Tables (7)

Table 1 Main Parameters of the GSENSE400 Sensor
Table 2 Combinations of the Different Decomposition Methods and Low-Frequency Fusion Methods Used in This Paper
Table 3 Objective Evaluation Indexes for the First Group of Images in the Vertical Direction
Table 4 Objective Evaluation Indexes for the Second Group of Images in the Vertical Direction
Table 5 Objective Evaluation Indexes for the First Group of Images in the Horizontal Direction
Table 6 Objective Evaluation Indexes for the Second Group of Images in the Horizontal Direction
Table 7 Running Time of Each Image Fused by the Different Algorithms

Equations (7)

Equations on this page are rendered with MathJax.

$$\min_{x} \|x\|_0 \quad \text{subject to} \quad \|Dx - y\| < \varepsilon$$
$$\min_{x} \|x\|_0 \quad \text{subject to} \quad \|Dx - y\| < C\varepsilon$$
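The two ℓ0-constrained problems above are standard error-constrained sparse coding formulations, typically solved with a greedy pursuit. Below is a minimal orthogonal matching pursuit (OMP) sketch in Python, assuming a dictionary D with unit-norm columns and a vectorized image patch y; it is an illustration of the formulation, not the paper's implementation:

```python
import numpy as np

def omp(D, y, tol):
    """Greedy OMP for: min ||x||_0  s.t.  ||D @ x - y|| < tol.

    D : (n, K) dictionary, columns assumed normalized to unit l2 norm.
    y : (n,) signal, e.g. a vectorized image patch.
    """
    n, K = D.shape
    support, coeffs = [], np.zeros(0)
    residual = y.astype(float).copy()
    while np.linalg.norm(residual) >= tol and len(support) < n:
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:      # numerical stagnation guard
            break
        support.append(k)
        # Re-fit all selected atoms by least squares (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(K)
    x[support] = coeffs
    return x
```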
$$\begin{cases} l_{\mathrm{fusion}} = \dfrac{l_i\,\omega_i + l_{i+1}\,\omega_{i+1}}{\omega_i + \omega_{i+1}} \\[6pt] \delta_{\mathrm{fusion}} = \max\left(\|\delta_i\|_1,\; \|\delta_{i+1}\|_1\right) \end{cases}$$
$$\omega = \exp\!\left(-\frac{(x-0.5)^2}{2\sigma^2}\right)$$
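Read together, Eq. (3) fuses the low-frequency bands of the two channels by a per-pixel weighted average, with the Gaussian weight of Eq. (4) favoring well-exposed (mid-gray) pixels, and fuses the sparse coefficient vectors by the max-ℓ1 activity rule. A minimal sketch, assuming the low-frequency bands are normalized to [0, 1]; the default sigma is an assumption, not a value from the paper:

```python
import numpy as np

def gaussian_weight(lf, sigma=0.2):
    """Eq. (4): weight each pixel by its proximity to mid-gray (0.5).
    lf is a low-frequency band assumed normalized to [0, 1]."""
    return np.exp(-((lf - 0.5) ** 2) / (2.0 * sigma ** 2))

def fuse_low_frequency(lf_i, lf_j, sigma=0.2):
    """Eq. (3), first line: per-pixel weighted average of the two
    channels' low-frequency bands."""
    w_i, w_j = gaussian_weight(lf_i, sigma), gaussian_weight(lf_j, sigma)
    return (lf_i * w_i + lf_j * w_j) / (w_i + w_j + 1e-12)

def fuse_sparse_codes(d_i, d_j):
    """Eq. (3), second line: keep the sparse coefficient vector with
    the larger l1 norm (higher 'activity')."""
    return d_i if np.abs(d_i).sum() >= np.abs(d_j).sum() else d_j
```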
$$\mathrm{En} = -\sum_{i=0}^{N} p_i \log p_i$$
$$\mathrm{MI} = \sum_{f=0}^{L} \sum_{a=0}^{L} p_{FA}(f,a) \cdot \log_2 \frac{p_{FA}(f,a)}{p_F(f)\, p_A(a)}$$
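Eqs. (5) and (6) are the entropy (En) and mutual information (MI) evaluation metrics used to score the fused images. A straightforward histogram-based sketch, assuming 8-bit gray levels; Eq. (5) does not specify a log base, so base 2 is assumed here:

```python
import numpy as np

def entropy(img, bins=256):
    """Eq. (5): Shannon entropy of the gray-level histogram
    (one bin per gray level for 8-bit input)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(f_img, a_img, bins=256):
    """Eq. (6): mutual information between the fused image F and a
    source image A, from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(f_img.ravel(), a_img.ravel(), bins=bins)
    p_fa = joint / joint.sum()
    p_f = p_fa.sum(axis=1, keepdims=True)   # marginal of F
    p_a = p_fa.sum(axis=0, keepdims=True)   # marginal of A
    nz = p_fa > 0                           # marginals are nonzero wherever p_fa is
    return float(np.sum(p_fa[nz] * np.log2(p_fa[nz] / (p_f * p_a)[nz])))
```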
$$Q_0 = \frac{4\,\sigma_{xy}\,\bar{x}\,\bar{y}}{(\bar{x}^2 + \bar{y}^2)(\sigma_x^2 + \sigma_y^2)}$$
$$Q(a,b,f) = \frac{1}{|W|} \sum_{\omega \in W} \big(\lambda(\omega)\, Q_0(a,f\,|\,\omega) + (1-\lambda(\omega))\, Q_0(b,f\,|\,\omega)\big)$$
$$Q_W(a,b,f) = \sum_{\omega \in W} c(\omega)\,\big(\lambda(\omega)\, Q_0(a,f\,|\,\omega) + (1-\lambda(\omega))\, Q_0(b,f\,|\,\omega)\big)$$
$$Q_E(a,b,f) = Q_W(a,b,f) \cdot Q_W(a',b',f')^{\alpha}$$
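Eq. (7) stacks the Wang–Bovik index Q0 and the Piella–Heijmans fusion quality metrics Q, QW, and QE built from it [29]. Below is a minimal non-overlapping-window sketch of Q0 and Q, taking the saliency λ(ω) as the relative local variance; the window size is an assumed parameter, and QE (which additionally applies QW to edge images a', b', f') is omitted:

```python
import numpy as np

def q0(x, y, eps=1e-12):
    """Eq. (7), first line: Wang-Bovik universal quality index Q0
    between two image windows x and y."""
    xm, ym = x.mean(), y.mean()
    sxy = ((x - xm) * (y - ym)).mean()
    return 4 * sxy * xm * ym / ((xm**2 + ym**2) * (x.var() + y.var()) + eps)

def piella_q(a, b, f, win=8):
    """Eq. (7), second line: Q(a, b, f) averaged over non-overlapping
    windows, with lambda(w) taken as relative local variance."""
    scores = []
    rows, cols = f.shape
    for r in range(0, rows - win + 1, win):
        for c in range(0, cols - win + 1, win):
            wa, wb, wf = (im[r:r + win, c:c + win] for im in (a, b, f))
            lam = wa.var() / (wa.var() + wb.var() + 1e-12)
            scores.append(lam * q0(wa, wf) + (1 - lam) * q0(wb, wf))
    return float(np.mean(scores))
```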
