View Synthesis for 3D Computer-Generated Holograms Using Deep Neural Fields
Optics Express 2025

Overview

(A) Camera captures are used to optimize a scene representation model. (B) Light field elemental views are rendered by iteratively evaluating the radiance field at uniformly spaced camera positions. Epipolar slices show that the scene representation model can synthesize novel views with high image quality. (C) The light field is converted to a complex wavefront by applying the inverse short-time Fourier transform (iSTFT). (D) The weights of a CNN are optimized by backpropagating the errors of a focal stack loss; the angular spectrum method (ASM) is used to simulate wavefront propagation at different distances from a reference plane. (E) Insets of model predictions for four test views are shown, with near and far focus.
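Step (C) above, converting a light field into a complex wavefront, can be sketched with a minimal non-overlapping variant of the iSTFT: each hogel's directional samples are treated as its local angular spectrum and inverse-FFTed into a spatial patch. The function name, array layout, and the random-phase surrogate for the unknown phase of incoherent captures are illustrative assumptions, not the paper's actual windowing or phase handling.

```python
import numpy as np

def lightfield_to_wavefront(lf_amp, seed=0):
    """Convert light field amplitudes, shaped (hy, hx, vy, vx)
    (hogel grid x view directions), into one complex wavefront.

    Each hogel's directional samples are taken as its local angular
    spectrum; an inverse FFT over the view axes yields a spatial
    patch, and the patches are tiled (a non-overlapping iSTFT)."""
    hy, hx, vy, vx = lf_amp.shape
    rng = np.random.default_rng(seed)
    # Assign a random phase per directional sample (assumed surrogate
    # for the unmeasured phase of incoherently captured photos).
    phase = np.exp(1j * 2 * np.pi * rng.random(lf_amp.shape))
    spectrum = lf_amp * phase
    # Undo the centered-spectrum convention, then inverse-FFT each
    # hogel's angular spectrum into a spatial patch.
    patches = np.fft.ifft2(np.fft.ifftshift(spectrum, axes=(2, 3)),
                           axes=(2, 3))
    # Tile the (vy, vx) patches on the (hy, hx) grid into one plane.
    return patches.transpose(0, 2, 1, 3).reshape(hy * vy, hx * vx)
```

The paper's pipeline evaluates the radiance field on a uniform camera grid to obtain the directional samples; here they would simply populate `lf_amp`.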

Abstract

Computer-generated holography (CGH) simulates the propagation and interference of complex light waves, allowing it to reconstruct realistic images captured from a specific viewpoint by solving the corresponding Maxwell equations. However, in applications such as virtual and augmented reality, viewers should be able to observe holograms freely from arbitrary viewpoints, much as we naturally see the physical world. In this work, we train a neural network to generate holograms at any view in a scene. Our result is the Neural Holographic Field: the first artificial-neural-network-based representation of light wave propagation in free space, which transforms sparse 2D photos into holograms that are not only 3D but also freely viewable from any perspective. We demonstrate our method by visualizing various smartphone-captured scenes from arbitrary six-degree-of-freedom viewpoints on a prototype holographic display. To this end, we encode the measured light intensity from photos into a neural network representation of the underlying wavefields. Our method implicitly learns amplitude and phase surrogates of the underlying incoherent light waves under coherent-light display conditions. During playback, the learned model predicts the underlying continuous complex wavefront propagating to arbitrary views to generate holograms.
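Both the focal stack supervision and the playback step rely on simulating free-space propagation of the complex wavefront. The angular spectrum method (ASM) named in the overview can be sketched as follows; this is the standard textbook formulation (propagating-wave transfer function, evanescent components zeroed), not the authors' implementation, and the function name and band-limiting choices are assumptions.

```python
import numpy as np

def asm_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex wavefront `field` by distance z (meters)
    with the angular spectrum method.

    The field is decomposed into plane waves via an FFT, each is
    multiplied by its free-space transfer function, and the result
    is inverse-FFTed back to the spatial domain."""
    ny, nx = field.shape
    # Spatial frequencies sampled by the SLM/display grid.
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Argument of the square root in the propagation kernel;
    # negative values correspond to evanescent waves.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    mask = arg > 0
    H = np.zeros((ny, nx), dtype=complex)
    H[mask] = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(arg[mask]))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Evaluating `asm_propagate` at several distances z from the reference plane yields the simulated focal stack whose error is backpropagated into the CNN weights during optimization.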

Video


Citation

 @article{chen2025neuralholographicfields,
author = {Kenneth Chen and Anzhou Wen and Yunxiang Zhang and Praneeth Chakravarthula and Qi Sun},
journal = {Opt. Express},
keywords = {Holographic displays; Imaging techniques; Light propagation; Neural networks; Spatial light modulators; Wave propagation},
number = {9},
pages = {19399--19408},
publisher = {Optica Publishing Group},
title = {View synthesis for 3D computer-generated holograms using deep neural fields},
volume = {33},
month = {May},
year = {2025},
url = {https://opg.optica.org/oe/abstract.cfm?URI=oe-33-9-19399},
doi = {10.1364/OE.559364}
}