Abstract

We propose a deep-learning-based method to estimate high-resolution images from multiple fiber bundle images. Our approach first aligns raw fiber bundle image sequences with a motion estimation neural network and then applies a 3D convolutional neural network to learn a mapping from the aligned sequences to their ground-truth images. Evaluations on lens tissue samples and a 1951 USAF resolution target suggest that the proposed method can significantly improve the spatial resolution of fiber bundle imaging systems.
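As a rough illustration of this two-stage idea, the sketch below aligns each raw frame to the first one with a homography and then fuses the aligned stack. A plain pixel-wise mean stands in for the trained 3D convolution network, and all function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def warp_homography(img, H):
    """Warp a grayscale frame by homography H (inverse mapping, nearest neighbor)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous output coords
    src = np.linalg.inv(H) @ pts                              # pull back into the source frame
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img, dtype=float)
    out.reshape(-1)[valid] = img[sy[valid], sx[valid]]        # sample valid source pixels
    return out

def fuse_aligned(frames, homographies):
    """Align each frame to the reference and fuse; the mean stands in for the learned network."""
    aligned = [warp_homography(f, H) for f, H in zip(frames, homographies)]
    return np.mean(aligned, axis=0)
```

With identity homographies the fusion simply averages the input frames; in the paper this fusion step is replaced by the trained 3D convolution network.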

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. M. Pierce, D. Yu, and R. Kortum, “High-resolution fiber-optic microendoscopy for in situ cellular imaging,” J. Vis. Exp. 47, 2306 (2011).
  2. J. Shao, W.-C. Liao, R. Liang, and K. Barnard, “Resolution enhancement for fiber bundle imaging using maximum a posteriori estimation,” Opt. Lett. 43, 1906–1909 (2018).
    [Crossref] [PubMed]
  3. J. Shao, J. Zhang, X. Huang, R. Liang, and K. Barnard, “Fiber bundle image restoration using deep learning,” Opt. Lett. 44, 1080–1083 (2019).
    [Crossref] [PubMed]
  4. X. Tao, H. Gao, R. Liao, J. Wang, and J. Jia, “Detail-revealing deep video super-resolution,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2017), pp. 4472–4480.
  5. Y. Jo, S. W. Oh, J. Kang, and S. J. Kim, “Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2018), pp. 3224–3232.
  6. G. D. Evangelidis and E. Z. Psarakis, “Parametric image alignment using enhanced correlation coefficient maximization,” IEEE Trans. Pattern Anal. Mach. Intell. 30, 1858–1865 (2008).
    [Crossref] [PubMed]
  7. T. Nguyen, S. W. Chen, S. S. Shivakumar, C. J. Taylor, and V. Kumar, “Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model,” IEEE Robot. Autom. Lett. 3, 2346–2353 (2018).
    [Crossref]
  8. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556 (2014).
  9. D. DeTone, T. Malisiewicz, and A. Rabinovich, “Deep image homography estimation,” arXiv preprint arXiv:1606.03798 (2016).
  10. S. Baker, A. Datta, and T. Kanade, “Parameterizing homographies,” Tech. Rep. CMU-RI-TR-06-11, Carnegie Mellon University, Pittsburgh, PA (2006).
  11. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).
  12. J. Caballero, C. Ledig, A. Aitken, A. Acosta, J. Totz, Z. Wang, and W. Shi, “Real-time video super-resolution with spatio-temporal networks and motion compensation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), pp. 4778–4787.
  13. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167 (2015).
  14. V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10), (2010), pp. 807–814.
  15. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “TensorFlow: A system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), (2016), pp. 265–283.
  16. G. Evangelidis, “IAT: A MATLAB toolbox for image alignment,” https://sites.google.com/site/imagealignment (2013).
  17. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
    [Crossref] [PubMed]
  18. H. R. Sheikh, A. C. Bovik, and G. D. Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Trans. Image Process. 14, 2117–2128 (2005).
    [Crossref] [PubMed]
  19. R. Reisenhofer, S. Bosse, G. Kutyniok, and T. Wiegand, “A Haar wavelet-based perceptual similarity index for image quality assessment,” Signal Process. Image Commun. 61, 33–43 (2018).
    [Crossref]




Figures (7)

Fig. 1 The pipeline of our proposed resolution-enhancement method, which takes a multi-frame FB image sequence as input. We choose the first FB frame F1 as the reference image for motion estimation. The raw FB image sequence is aligned by the spatial transformer using homographies estimated by the motion estimation network. Our 3D convolution network then takes the aligned FB image sequence as input and outputs one HR image.
Fig. 2 Architecture of our 3D convolution network. n denotes the number of filters and s the stride of each convolutional layer.
Fig. 3 Evaluating the motion estimation network: average MAE on the validation dataset versus training epoch, for two sizes (128 and 512 units) of the first fully connected layer.
Fig. 4 Evaluating the 3D convolution network (input sequence length 7): average PSNR on the validation dataset versus the number of training epochs.
Fig. 5 Experimental results on the lens tissue sample: (a) raw FB image, (b) ground-truth FB image, (c) result from GARNN, (d) result from the 3D convolution network with input sequence length 3, and (e) result from the 3D convolution network with input sequence length 7. Arrows in three colors (red, green, and blue) mark three small representative areas, showing that the network with input sequence length 7 recovers the finest details.
Fig. 6 Experimental results on a 1951 USAF resolution target (input sequence length 7): (a) raw FB image, (b) GT image, (c) result from GARNN, (d) result from the motion estimation network and the 3D convolution network, (e) result from ECC image alignment and the 3D convolution network, and (f) result from the MAP method.
Fig. 7 Experimental results on a 1951 USAF resolution target captured by directly attaching the FB probe to the sample (input sequence length 7): (a) raw FB image, (b) result from GARNN, (c) result from the motion estimation network and the 3D convolution network, (d) result from the ECC method and the 3D convolution network, and (e) result from the MAP method.
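The spatial transformer in Fig. 1 needs a 3×3 homography for each frame. Deep homography models such as those in refs. 7 and 9 typically predict four corner correspondences, which can be converted to a matrix with the direct linear transform (DLT). The helper below is an illustrative version of that conversion under this assumption, not the paper's implementation.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate a 3x3 homography mapping src -> dst (4+ correspondences) via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the linear system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] = 1
```

For example, four unit-square corners translated by (2, 3) recover a pure translation homography.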

Tables (1)


Table 1 Quantitative Measures on the Lens Tissue Sample Experimental Results

Equations (2)


$$ L_{\mathrm{motion}} = \left\lVert G_j - T_{\tilde{H}_{ij}}(G_i) \right\rVert_1, $$
$$ L_{3D} = \left\lVert G_1 - \mathcal{N}_{3D}(\tilde{F}) \right\rVert_2, $$
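Reading the two losses literally (an L1 norm for the motion estimation network and an L2 norm for the 3D convolution network), they can be evaluated as below. The paper may average over pixels or use a squared L2 term, so treat this as an assumption.

```python
import numpy as np

def l_motion(G_j, G_i_warped):
    """L1 photometric loss between target patch G_j and the warped source patch."""
    return np.abs(G_j - G_i_warped).sum()

def l_3d(G_1, prediction):
    """L2 (Euclidean) loss between ground-truth frame G_1 and the network output."""
    return np.sqrt(((G_1 - prediction) ** 2).sum())
```

Both losses are zero exactly when alignment (respectively reconstruction) is perfect, which is what the training objectives minimize.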
