
Abstract
Light Field Angular Super-Resolution (LFASR) is a critical task that enables applications such as depth estimation, refocusing, and 3D scene reconstruction. Light fields acquired with plenoptic cameras suffer from an inherent trade-off between angular and spatial resolution due to sensor limitations. Many learning-based LFASR methods have been proposed to address this challenge; however, reconstructing light fields with a wide baseline remains difficult. In this study, we propose an end-to-end, learning-based, geometry-aware network that exploits multiple light field representations. A multi-scale residual network with varying receptive fields extracts spatial and angular features, enabling angular resolution enhancement without compromising spatial fidelity. Extensive experiments demonstrate that the proposed method recovers fine details at high angular resolution while preserving the intricate parallax structure of the light field. Quantitative and qualitative evaluations on both synthetic and real-world datasets confirm that the proposed approach outperforms existing state-of-the-art methods. By improving the angular resolution of the light field without reducing spatial sharpness, the method supports applications such as depth estimation and 3D reconstruction while preserving parallax details and structure.
Keywords: Multiple light fields, Geometry-Aware Network, U-Net, Multi-scale Residual Network, Angular Reconstruction
DOI: https://doi.org/10.26555/ijain.v11i3.1667
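As a rough illustration of the multi-scale residual feature extraction described in the abstract, the sketch below shows a residual block with parallel convolution branches of different kernel sizes, so each branch has a different effective receptive field. This is a minimal PyTorch sketch under stated assumptions: the class name MultiScaleResBlock, the channel count, and the kernel sizes are illustrative choices, not the authors' published architecture.

```python
# Hypothetical sketch of a multi-scale residual block with varying
# receptive fields. Names, channel counts, and kernel sizes are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class MultiScaleResBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Parallel branches with different kernel sizes give each
        # branch a different effective receptive field.
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        # A 1x1 convolution fuses the concatenated multi-scale
        # features back to the input channel count.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [self.act(self.branch3(x)),
             self.act(self.branch5(x)),
             self.act(self.branch7(x))],
            dim=1)
        # The residual connection preserves spatial detail while the
        # fused multi-scale features add broader spatial context.
        return x + self.fuse(feats)

if __name__ == "__main__":
    # Example: sub-aperture views of a 3x3 light field flattened into
    # the batch axis, each a 64-channel 32x32 feature map.
    block = MultiScaleResBlock(channels=64)
    x = torch.randn(9, 64, 32, 32)
    y = block(x)
    print(y.shape)  # torch.Size([9, 64, 32, 32])
```

Concatenating the branches and fusing them with a 1x1 convolution is one common way to combine multi-scale features; the residual connection helps preserve spatial sharpness, in line with the stated goal of enhancing angular resolution without degrading spatial fidelity.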

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
andri.pranolo.id@ieee.org (publication issues)