A curated list of resources on holographic displays, inspired by awesome-computer-vision.
This list was compiled during my survey of papers on holographic displays and is not meant to be exhaustive. It is organized so that I can easily navigate different topics in holography. I would like to thank the authors of the following papers for providing great references:
- Neural Holography with Camera-in-the-loop Training (Peng et al. 2020)
- Learned Hardware-in-the-loop Phase Retrieval for Holographic Near-Eye Displays (Chakravarthula et al. 2020)
- Background, Theory, and Survey
- Computer Generated Holography (CGH)
- Holographic Display Architectures and Optics
- Etendue, Eyebox, Pupil Related
- Labs and Researchers
- Introduction to Fourier Optics by Joseph W. Goodman is a great book to learn the basics of wave propagation and holography.
- Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective (Chang et al. 2020)
- Deep learning in holography and coherent imaging (Rivenson et al. 2019)
Some methods are based on the double phase encoding scheme, where two phase-only modulation patterns are interleaved on a single SLM (a minimal sketch follows the list below):
- Computer-generated double-phase holograms (Hsueh et al. 1978) proposed to decompose a complex field into two phase-only components to generate holograms using a single phase-only SLM.
- Holographic Near-Eye Displays for Virtual and Augmented Reality (Maimone et al. 2017) proposed a holographic near-eye display system based on the double phase encoding scheme.
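The decomposition used by these methods is compact enough to sketch. Assuming a normalized complex field, any value a*exp(i*phi) with a <= 1 can be written as 0.5*exp(i*(phi + arccos(a))) + 0.5*exp(i*(phi - arccos(a))); interleaving the two phase maps on a checkerboard lets a single phase-only SLM display both components. The NumPy function below is only an illustration of this idea, not code from either paper.

```python
import numpy as np

def double_phase_encode(field):
    """Encode a complex field as a single phase-only SLM pattern (double phase method).

    Any complex value a*exp(1j*phi) with a in [0, 1] equals the sum of two unit
    phasors 0.5*exp(1j*(phi + arccos(a))) + 0.5*exp(1j*(phi - arccos(a))).
    The two phase maps are interleaved on a checkerboard so a single phase-only
    SLM can display both components.
    """
    amp = np.abs(field)
    amp = amp / (amp.max() + 1e-12)          # normalize amplitude into [0, 1]
    phs = np.angle(field)

    offset = np.arccos(amp)                  # amplitude encoded as a phase offset
    phase_a = phs + offset
    phase_b = phs - offset

    # checkerboard interleaving of the two phase-only components
    h, w = field.shape
    checker = (np.indices((h, w)).sum(axis=0) % 2).astype(bool)
    slm_phase = np.where(checker, phase_a, phase_b)
    return np.mod(slm_phase, 2 * np.pi)
```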
Other methods include:
- Near-eye Light Field Holographic Rendering with Spherical Waves for Wide Field of View Interactive 3D Computer Graphics (Shi et al. 2017)
- Computer-generated holograms of 3-D objects composed of tilted planar segments (Leseberg et al. 1988)
- Computer-generated holograms of tilted planes by a spatial frequency approach (Tommasi et al. 1993)
- Computer-generated holograms for three-dimensional surface objects with shade and texture (Matsushima 2005)
- Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method (Matsushima et al. 2009)
- Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique (Matsushima et al. 2014)
- Computer generated holograms from three dimensional meshes using an analytic light transport model (Ahrenberg et al. 2008)
- Fast and effective occlusion culling for 3D holographic displays by inverse orthographic projection with low angular sampling (Jia et al. 2014)
- Computer-generated hologram with occlusion effect using layer-based processing (Zhang et al. 2017)
- Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method (Zhao et al. 2015)
- Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications (Chen et al. 2015)
- Computer generated hologram with geometric occlusion using GPU-accelerated depth buffer rasterization for three-dimensional display (Chen et al. 2009)
- Holographic near-eye displays based on overlap-add stereograms (Padmanaban et al. 2019)
- Layered holographic stereogram based on inverse Fresnel diffraction (Zhang et al. 2016)
- Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues (Zhang et al. 2015)
A family of iterative methods is based on the Gerchberg-Saxton (GS) Algorithm, where the phase and amplitude patterns at two planes are updated iteratively as the wave propagates back and forth between the two planes (a bare-bones sketch follows the list below):
- A practical algorithm for the determination of phase from image and diffraction plane pictures (Gerchberg et al. 1972) proposed the Gerchberg-Saxton (GS) Algorithm
- Mix-and-Match Holography (Peng et al. 2017) proposed an iterative phase-retrieval method built upon GS
- Fresnel ping-pong algorithm for two-plane computer-generated hologram display (Dorsch et al. 1994)
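For reference, the GS loop itself is only a few lines. In the sketch below, `propagate` and `inverse_propagate` are placeholders for whatever operators connect the SLM and image planes (e.g. the angular spectrum method and its inverse); this is a minimal illustration rather than any paper's actual implementation.

```python
import numpy as np

def gerchberg_saxton(target_amp, propagate, inverse_propagate, num_iters=50):
    """Bare-bones Gerchberg-Saxton loop between the SLM plane and the image plane.

    `propagate` and `inverse_propagate` are user-supplied operators (e.g. an
    angular spectrum propagator and its inverse) that move a complex field
    between the two planes.
    """
    slm_phase = np.random.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(num_iters):
        # SLM plane constraint: phase-only modulation with unit amplitude
        field_slm = np.exp(1j * slm_phase)
        # propagate to the image plane
        field_img = propagate(field_slm)
        # image plane constraint: impose the target amplitude, keep the phase
        field_img = target_amp * np.exp(1j * np.angle(field_img))
        # back-propagate and extract the updated SLM phase
        slm_phase = np.angle(inverse_propagate(field_img))
    return slm_phase
```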
Other optimization-based methods leverage gradient descent or non-convex optimization techniques to optimize the phase pattern of the SLM (a generic sketch follows the list below):
- Hogel-free Holography (Chakravarthula et al. 2022)
- Multi-depth hologram generation using stochastic gradient descent algorithm with complex loss function (Chen et al. 2021)
- Realistic Defocus Blur for Multiplane Computer-Generated Holography (Kavaklı et al. 2021) proposed a novel loss function aimed at synthesizing high-quality defocus blur; it can be integrated into various iterative (GS, gradient descent) and non-iterative (double phase encoding) methods.
- Wirtinger Holography for Near-Eye Displays (Chakravarthula et al. 2019) optimizes the phase-only SLM pattern using closed-form Wirtinger complex derivatives in gradient descent.
- 3D computer-generated holography by non-convex optimization (Zhang et al. 2017)
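As a rough illustration of the gradient-descent formulation, the PyTorch sketch below optimizes an SLM phase map against a plain amplitude MSE loss. The differentiable propagator `propagate`, the loss, and the hyperparameters are placeholders; the papers above use more elaborate losses (e.g. the defocus-aware loss of Kavaklı et al.) or, in the case of Wirtinger Holography, closed-form complex derivatives rather than generic autodiff.

```python
import torch

def sgd_hologram(target_amp, propagate, num_iters=500, lr=0.1):
    """Direct SLM phase optimization with first-order gradient descent (Adam).

    `propagate` is assumed to be a differentiable propagation model written
    with torch ops (e.g. a differentiable angular spectrum propagator); the
    amplitude MSE loss is a stand-in for the losses used in the papers above.
    """
    slm_phase = torch.zeros_like(target_amp, requires_grad=True)
    optimizer = torch.optim.Adam([slm_phase], lr=lr)
    for _ in range(num_iters):
        optimizer.zero_grad()
        # phase-only field on the SLM
        field = torch.complex(torch.cos(slm_phase), torch.sin(slm_phase))
        # reconstructed amplitude at the target plane
        recon_amp = propagate(field).abs()
        loss = torch.nn.functional.mse_loss(recon_amp, target_amp)
        loss.backward()
        optimizer.step()
    return slm_phase.detach()
```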
Unfortunately, iterative methods are inherently slow and thus not suitable for real-time CGH.
A major focus in deep learning for CGH is using camera-in-the-loop (CITL) training to learn an accurate free-space wave propagation and optical hardware model for holographic displays (a simplified sketch of the idea follows the list below):
- Neural Holography with Camera-in-the-loop Training (Peng et al. 2020) is the first to use camera-in-the-loop (CITL) training to optimize a parameterized wave propagation model in which optical aberrations, SLM non-linearities, and other non-idealities are learned from data. A CNN is also proposed to synthesize 2D and 3D holograms in real time.
- Neural 3D Holography: Learning Accurate Wave Propagation Models for 3D Holographic Virtual and Augmented Reality Displays (Choi et al. 2021) uses two CNNs to directly model optical aberrations, SLM non-linearities, and other non-idealities at the input plane and at multiple target planes. The two CNNs introduce more degrees of freedom than the parameterized propagation model of Peng et al. 2020, enabling higher-quality 3D holograms.
- Time-multiplexed Neural Holography: A Flexible Framework for Holographic Near-eye Displays with Fast Heavily-quantized Spatial Light Modulators (Choi et al. 2022) leverages time-multiplexed quantized SLM patterns to synthesize high-quality defocus blur.
- Learned Hardware-in-the-loop Phase Retrieval for Holographic Near-Eye Displays (Chakravarthula et al. 2020) uses CITL to learn an aberration approximator that models the residual between holograms generated from ideal wave propagation (i.e. ASM) and real-world wave propagation models. An adversarial loss is used in addition to reconstruction loss to optimize the synthesized holograms.
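The sketch below caricatures the camera-in-the-loop idea for per-pattern optimization: the loss is computed on a captured image while gradients flow through a differentiable simulated model. Here `capture_from_display` is a hypothetical stand-in for the physical SLM-plus-camera pipeline, and the substitution trick shown is just one way to mix measurements with simulated gradients; the papers above additionally use such captures to train parameterized or CNN-based propagation models.

```python
import torch

def citl_optimize(target_amp, sim_propagate, capture_from_display,
                  num_iters=200, lr=0.05):
    """Toy camera-in-the-loop (CITL) phase optimization.

    `sim_propagate` is a differentiable simulated propagation model, while
    `capture_from_display` is a hypothetical stand-in for displaying the phase
    on the physical SLM and capturing the result with a calibrated camera
    (it returns a non-differentiable amplitude image of matching shape).
    The forward pass uses the measurement; the backward pass uses the
    simulated model's gradients.
    """
    slm_phase = torch.zeros_like(target_amp, requires_grad=True)
    optimizer = torch.optim.Adam([slm_phase], lr=lr)
    for _ in range(num_iters):
        optimizer.zero_grad()
        field = torch.complex(torch.cos(slm_phase), torch.sin(slm_phase))
        sim_amp = sim_propagate(field).abs()
        cap_amp = capture_from_display(slm_phase.detach())  # hardware measurement
        # substitute the captured amplitude in the forward pass, keep simulated gradients
        amp = sim_amp + (cap_amp - sim_amp).detach()
        loss = torch.nn.functional.mse_loss(amp, target_amp)
        loss.backward()
        optimizer.step()
    return slm_phase.detach()
```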
Instead of using a predetermined convolution kernel to compute wave propagation (i.e. the angular spectrum method), Learned holographic light transport (Kavaklı et al. 2021) learns the wave propagation convolution kernel directly from images captured by a physical holographic display.
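Since the angular spectrum method is the idealized baseline that these learned models refine, here is a minimal NumPy version for reference (band-limiting and other anti-aliasing safeguards from the literature are omitted).

```python
import numpy as np

def asm_propagate(field, wavelength, pixel_pitch, distance):
    """Free-space propagation with the angular spectrum method (ASM).

    All lengths are in meters; `field` is a 2D complex array sampled at
    `pixel_pitch`.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)   # spatial frequencies (cycles/m)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)

    # transfer function H = exp(i*2*pi*d*sqrt(1/lambda^2 - fx^2 - fy^2))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * distance * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)                        # drop evanescent components

    return np.fft.ifft2(np.fft.fft2(field) * H)
```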
In contrast, the following works assume a naive wave propagation model (i.e. the angular spectrum method) and directly regress complex holograms using different CNN architectures (a toy sketch follows the list below):
- Towards real-time photorealistic 3D holography with deep neural networks (Shi et al. 2021)
- DeepCGH: 3D computer-generated holography using deep learning (Eybposh et al. 2020) uses a CNN to estimate a complex field at a fixed plane from a set of 3D target multiplane inputs; the complex field is then reverse propagated to the SLM plane to generate a phase pattern.
- Deep neural network for multi-depth hologram generation and its training strategy (Lee et al. 2020) directly estimates the SLM phase pattern from 3D target multiplane inputs using a CNN.
- Deep-learning-generated holography (Horisaki et al. 2018)
- Phase recovery and holographic image reconstruction using deep learning in neural networks (Rivenson et al. 2018)
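The direct-regression idea can be illustrated with a toy network that maps a target amplitude image straight to an SLM phase map. The architecture below is deliberately tiny and purely illustrative; the papers above use much larger networks and train them by propagating the predicted field with a differentiable model and comparing the reconstruction against the target.

```python
import math
import torch
import torch.nn as nn

class PhaseRegressor(nn.Module):
    """Toy CNN mapping a target amplitude image (B, 1, H, W) to an SLM phase map."""

    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, target_amp):
        # squash the unconstrained output into a phase in (-pi, pi)
        return math.pi * torch.tanh(self.net(target_amp))
```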
Most CGH display frameworks use a coherent light source (laser) and a single phase-only SLM. The following works explore alternatives to the current paradigm, such as using partially coherent light sources, amplitude SLMs, and adding additional optical elements.
Partially-coherent light sources are used to reduce speckle artifacts:
- Speckle-free holography with partially coherent light sources and camera-in-the-loop calibration (Peng et al. 2021) uses partially coherent light sources and camera-in-the-loop optimization to reduce speckle artifacts.
- Light source optimization for partially coherent holographic displays with consideration of speckle contrast, resolution, and depth of field
- Holographic head-mounted display with RGB light emitting diode light source (Moon et al. 2014)
Special optical elements are used to improve the holographic display quality:
- Monocular 3D see-through head-mounted display via complex amplitude modulation (Gao et al. 2016)
- Optimizing image quality for holographic near-eye displays with Michelson Holography (Choi et al. 2021)
- Holographic Optics for Thin and Lightweight Virtual Reality (Maimone et al. 2020)
- Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina (Jang et al. 2017)
- Holographic display for see-through augmented reality using mirror-lens holographic optical element (Li et al. 2016)
- 3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation (Yeom et al. 2015)
Bulky headsets hamper the development of AR/VR, so reducing the size of holographic displays is important:
- Holographic Glasses for Virtual Reality (Kim et al. 2022) presents a holographic display system with an eyeglasses-like form factor. An optical stack of 2.5 mm is achieved by combining a pupil-replicating waveguide, SLMs, and geometric phase lenses.
- Pupil-aware Holography (Chakravarthula et al. 2022)
- Neural Etendue Expander for Ultra-Wide-Angle High-Fidelity Holographic Display (Baek et al. 2022)
- High Resolution étendue expansion for holographic displays (Kuo et al. 2020)
- Holographic Near-eye Display with Expanded Eye-box (Jang et al. 2018)
- Computational Imaging Lab, Stanford University
- Computational Imaging Lab, Princeton University
- Computational Biophotonics Laboratory, UNC Chapel Hill
- Graphics and Virtual Reality Group, UNC Chapel Hill
- Computational Light Laboratory, University College London
- Computational Imaging Group, KAUST
- Optical Engineering and Quantum Electronics Lab, Seoul National University
- NVIDIA Research