NVIDIA researchers show off thin holographic VR glasses
A team of researchers from NVIDIA Research and Stanford has published a new paper demonstrating a pair of thin holographic VR glasses. The displays can show true holographic content, addressing the vergence-accommodation conflict. Although the research prototypes demonstrating the principles have a much smaller field of view, the researchers say a 120° diagonal field of view could be achieved with different components.
In a paper released ahead of the SIGGRAPH 2022 conference, a team of researchers from NVIDIA Research and Stanford demonstrated a near-eye VR display that can present flat images or holograms in a compact form factor. The paper also explores the interconnected system variables that govern key viewing factors such as field of view, eyebox, and eye relief, and the researchers experiment with different rendering algorithms for the best visual quality.
The form factor of commercially available VR headsets hasn't improved much over the years, largely due to an optical constraint. Most VR headsets use a single display and a single lens. To focus the light from the display into your eye, the lens must sit some distance from the display; any closer and the image becomes blurry.
Eliminating this gap between lens and display would unlock form factors previously impossible for VR headsets, so understandably there has been plenty of R&D exploring how this can be done.
In the newly published NVIDIA-Stanford paper, Holographic Glasses for Virtual Reality, the team shows that it built a holographic display using a spatial light modulator combined with a waveguide rather than a traditional lens.
The team built both a large benchtop model – to demonstrate the basic methods and experiment with different image rendering algorithms for optimal display quality – and a compact wearable model to demonstrate the form factor. The images you see of the compact, glasses-like form factor don't include the electronics that drive the display (miniaturizing that part of the system is beyond the scope of the research).
You may recall that a while ago Meta Reality Labs revealed its own work on a compact, goggle-sized VR headset. Although that work involves holograms (which form the lenses of the system), it is not a "holographic display," meaning it does not solve the vergence-accommodation conflict common to most VR displays.
The NVIDIA-Stanford researchers, on the other hand, write that their Holographic Glasses system is a true holographic display (thanks to its use of a spatial light modulator), which they tout as a unique advantage of their approach. However, the team also writes that it's possible to show typical flat images on the display (which, as in contemporary VR headsets, can be converged for stereoscopic viewing).
Not only that, but the Holographic Glasses project boasts a thickness of just 2.5mm for the entire display, significantly thinner than the 9mm of the Reality Labs project (which was already incredibly thin!).
As with any good paper, the NVIDIA-Stanford team is quick to point out the limitations of its work.
For one, their wearable prototype has a tiny 22.8° diagonal field of view and an equally tiny 2.3mm eyebox. Both are far too small to be viable for a practical VR headset.
However, the researchers write that the limited field of view is largely due to their experimental combination of new components that are not optimized to work together. Drastically widening the field of view, they explain, is largely a matter of choosing complementary components.
"[…] the [system's field of view] was mainly limited by the size of the available [spatial light modulator] and the focal length of the GP lens, both of which could be improved with different components. For example, the focal length could be halved without significantly increasing the overall thickness by stacking two identical GP lenses and a circular polarizer [Moon et al. 2020]. With a 2″ SLM and a 15mm focal length GP lens, we could achieve a monocular field of view of up to 120°."
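The quoted 120° figure is consistent with simple thin-lens geometry. The sketch below is only a back-of-envelope sanity check, not the paper's own optical model: a lens of focal length f covering an aperture of width w spans a field of view of roughly 2·atan(w / 2f).

```python
import math

# Back-of-envelope check of the quoted numbers, assuming simple
# thin-lens geometry (not the paper's exact optical model):
# a lens of focal length f in front of an aperture of width w
# covers a field of view of about 2 * atan(w / (2 * f)).
slm_width_mm = 2 * 25.4   # 2" SLM, as quoted
focal_length_mm = 15.0    # GP lens focal length, as quoted

fov_deg = 2 * math.degrees(math.atan(slm_width_mm / (2 * focal_length_mm)))
print(round(fov_deg, 1))  # ≈ 118.9, consistent with the "up to 120°" claim
```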
As for the 2.3mm eyebox (the area within which the eye can see the rendered image), it's far too small for practical use. However, the researchers write that they experimented with a straightforward way to expand it.
With the addition of eye-tracking, they show that the eyebox can be dynamically expanded by up to 8mm by changing the angle of the light sent into the waveguide. Granted, 8mm is still a very narrow eyebox and may be too small for practical use due to variations in eye relief and in how the glasses sit on the head from user to user.
But there are variables in the system that can be adjusted to change key display factors such as eyebox size. Through their work, the researchers established the relationships between these variables, offering clear insight into the trade-offs needed to achieve different outcomes.
As they show, the size of the eyebox is directly related to the pixel pitch (distance between pixels) of the spatial light modulator, while the field of view is related to the modulator's overall size. They also chart the limits of eye relief and converging angle, relative to a sub-20mm eye relief (which the researchers consider the upper limit for a true "glasses" form factor).
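The eyebox/pixel-pitch relationship can be illustrated with a first-order diffraction estimate. The formula and the wavelength and pixel-pitch values below are illustrative assumptions, not figures from the paper: an SLM with pixel pitch p steers light over an angle of roughly λ/p, so at focal length f the eyebox width is about λ·f/p, meaning smaller pixels yield a bigger eyebox.

```python
# First-order, diffraction-limited eyebox estimate for an SLM-based
# holographic display: eyebox ≈ wavelength * focal_length / pixel_pitch.
# All values below are illustrative assumptions, not taken from the paper.
wavelength_m = 532e-9     # assumed green laser wavelength
focal_length_m = 15e-3    # GP lens focal length quoted earlier
pixel_pitch_m = 3.7e-6    # assumed SLM pixel pitch

eyebox_mm = wavelength_m * focal_length_m / pixel_pitch_m * 1e3
print(f"{eyebox_mm:.2f} mm")  # ≈ 2.16 mm, the same order as the prototype's 2.3mm eyebox
```

This back-of-envelope relation makes the trade-off concrete: halving the pixel pitch doubles the eyebox, but for a fixed SLM size it also halves the number of pixels per degree available for field of view.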
An analysis of this "design trade space," as they call it, is a key part of the paper.
“With our design and experimental prototypes, we hope to stimulate new research and engineering directions toward ultra-thin VR displays that can be worn all day with form factors comparable to conventional eyewear,” they write.
The paper is credited to researchers Jonghyun Kim, Manu Gopakumar, Suyeon Choi, Yifan Peng, Ward Lopes, and Gordon Wetzstein.