NVIDIA Adds Eye-tracking Support to VRSS Foveated Rendering Tech

NVIDIA is upgrading its Variable Rate Supersampling (VRSS) with support for headsets with eye-tracking, allowing supported applications to concentrate rendering quality where the user is looking while reducing it elsewhere.

Nvidia today announced the latest version of VRSS, a foveated rendering implementation that works with any of the company’s RTX series GPUs and any application which supports DirectX 11, forward rendering, and MSAA.

The first version of VRSS offered only static foveated rendering, which increases the effective resolution at the center of the image (where the lens is sharpest) while decreasing quality outside the central area, effectively concentrating GPU power where it matters most. The foveated region can be supersampled up to 8x.

Image courtesy NVIDIA

VRSS 2 adds support for dynamic foveated rendering, which allows the system to move the supersampled area to wherever the eye is looking. Although lens sharpness drops off as the user looks away from the lens center, there can still be perceptual benefits to supersampling the region the eye is actually pointed at.

This of course only works for headsets equipped with eye-tracking, which is not common in consumer-grade VR headsets today but is expected to become more widespread in the future.

Out of the gate, Nvidia says that the dynamic foveated rendering in VRSS 2 will support HP’s new Reverb G2 Omnicept Edition headset. In the future we hope to see support added for HTC’s Vive Pro Eye and Varjo headsets, both of which include eye-tracking hardware.

VRSS 2 is supported as of GeForce driver version R465 which became available on March 30th. Users must enable VRSS via the Nvidia Control Panel (Manage 3D Settings > Global Settings > Virtual Reality – Variable Rate Supersampling > Adaptive).

Although eye-tracking headsets themselves appear to require per-headset integrations to support dynamic foveated rendering with VRSS 2, Nvidia says that applications don’t need to be modified in any way to get the benefits of VRSS 2, provided they support DirectX 11, forward rendering, and MSAA. That’s a good thing because it means developers don’t need to rely on any technology that’s specific to Nvidia GPUs in order to benefit from VRSS 2.

Developers with compatible titles need only submit their application to Nvidia for consideration. If the application benefits from VRSS 2, Nvidia will whitelist the app in a future driver update.

Nvidia today also published a new list with all games currently supporting VRSS:

Games Supporting NVIDIA VRSS – April 12th, 2021
Battlewake
Boneworks
Budget Cuts 2: Mission Insolvency
Doctor Who: The Edge of Time
Eternity Warriors VR
Hot Dogs, Horseshoes, & Hand Grenades
In Death
Job Simulator
Killing Floor: Incursion
L.A. Noire VR
Lone Echo
Medal of Honor: Above and Beyond
Mercenary 2: Silicon Rising
Onward VR
Pavlov VR
PokerStars VR
Raw Data
Rec Room
Rick & Morty: Virtual Rick-ality
Robo Recall
Sairento VR
Serious Sam VR: The Last Hope
Skeet: VR Target Shooting
Sniper Elite VR
Space Pirate Trainer
Special Force VR: Infinity War
Spiderman Far From Home
Spiderman Homecoming VR
Talos Principle VR
The Soulkeeper VR
The Walking Dead: Saints & Sinners
VRChat


Nvidia Develops Lightweight VR Gaze Tracking System Using LED Sensors

All of the world’s top virtual reality headset makers agree that gaze tracking is going to be fundamental to the next generation of VR hardware, as the ability to sense an eye’s position in real time enables a computer to optimize detail rendering, and even offer cursor controls without the need for hand or head movements. But gaze tracking hardware currently is neither cheap nor small, so researchers at Nvidia have come up with a novel solution that could enable the technology to become more widespread.

Nvidia’s new gaze tracker uses a capability of common LEDs — their ability to both emit and sense light — to simplify the process of determining the eye’s position relative to a display. Like other gaze tracking systems, the Nvidia system uses a ring of infrared LEDs to project unseen light into the eye, but here, LEDs also are used for color-selective sensing from the same location. This enables the smallest and lowest-cost gaze tracking yet developed, the researchers note, while matching the accuracy and sampling rates of today’s most common solutions.

In one prototype, Nvidia uses a total of nine LEDs per eye, with three emitting IR light and six sensing the light, while a second prototype uses six LEDs per eye as both light sensors and light sources. Because the LEDs consume little power and rely on comparatively simple controller hardware and software, they cut overall latency, reduce the number of cameras needed by the headset, and remove the need for an extra image processing block within the headset’s pipeline.
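NVIDIA's actual estimation pipeline isn't described in detail here, but the sensing idea can be sketched. As a loose, hypothetical illustration (not the paper's method): if each LED in the ring reports the intensity of reflected IR light, a first-order gaze estimate could be the intensity-weighted centroid of the sensor positions, since the corneal reflection shifts as the eye moves.

```csharp
using System;

// Hypothetical sketch only -- NOT NVIDIA's published method.
// Assumes a ring of IR-sensing LEDs around the lens, each reporting
// a reflected-light intensity; the gaze offset is approximated as
// the intensity-weighted centroid of the sensor positions.
static class LedGazeSketch
{
    public static (float X, float Y) EstimateGazeOffset(
        (float X, float Y)[] ledPositions, float[] intensities)
    {
        if (ledPositions.Length != intensities.Length)
            throw new ArgumentException("One reading per LED expected.");

        float sumX = 0f, sumY = 0f, total = 0f;
        for (int i = 0; i < ledPositions.Length; i++)
        {
            sumX += ledPositions[i].X * intensities[i];
            sumY += ledPositions[i].Y * intensities[i];
            total += intensities[i];
        }
        // Centroid in lens-plane coordinates; a real system would map
        // this through per-user calibration to a gaze angle.
        return (sumX / total, sumY / total);
    }
}
```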

Although Nvidia’s solution is performant enough to work for typical VR applications, the researchers caution that it might not be suitable for reading, neurological applications, or psychological research. While the LED system has a “good” median angular error of 0.7 degrees and a mean angular error as low as 1.1 degrees, camera-based alternatives can deliver “very high accuracy” results with error levels under 0.5 degrees. Nvidia also notes that its initial calibration phase is “comparably longer” versus other solutions, which in many cases use a “look here, here, here, and here” system to sync with eyes, and must recalibrate if the wearer’s face moves relative to the sensing hardware.

Nvidia’s gaze-sensing LED system is still in the prototype stage, so it’s not yet ready to challenge the Tobii solutions found in HTC and Pico VR headsets, or the 7invensun alternative selected for Nreal Light. But it could make its way into the next generation of VR headsets, enabling a new class of inexpensive and lightweight models with greater performance — assuming the researchers can find ways to make the calibration process fast enough not to annoy users.

This post by Jeremy Horwitz originally appeared on VentureBeat. 


Oculus Quest Gets Dynamic Fixed Foveated Rendering To Balance Quality & Performance

The Oculus Quest now has a Dynamic Fixed Foveated Rendering (FFR) feature, which developers can use instead of manually setting the FFR level.

UPDATE April 28: this feature is now available for Unity, the game engine used for the majority of Oculus Quest content.

This article was originally published December 20.

Fixed Foveated Rendering is a rendering feature developers can use on Oculus Quest. It renders the periphery of the lenses at a lower resolution than the center, making it easier for software to maintain a consistent, comfortable frame rate by shaving detail from places where it is less noticeable. There are four levels of FFR developers can choose from: Low, Medium, High, and High Top.

FFR can make it easier for developers to port their PC VR games to Quest. However, the High and High Top levels can be very noticeable to the user. As we stated in our review of the Quest headset:

In the game’s opening training montage I couldn’t help but point my eyes down and see two blurs for feet running on a treadmill. Tilting my head up over text to move it into the foveated area revealed the scale and size of the effect

Dynamic FFR allows developers to let the Oculus system dynamically adapt the level of foveation based on the GPU utilization. This means that unless it is needed at that time for performance, users won’t see the pixelation and blur seen in some Quest titles today.

The feature is off by default, however, so developers will need to add it to their games via a software update to get the benefits.

For Unity, this can be done by setting useDynamicFixedFoveatedRendering to true on the OVRManager script.
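For reference, here is a minimal Unity sketch, assuming the Oculus Integration package (which provides the OVRManager script mentioned above):

```csharp
using UnityEngine;

// Minimal sketch assuming the Oculus Integration package for Unity.
// Attach to any object in a scene that also contains an OVRManager.
public class DynamicFoveationSetup : MonoBehaviour
{
    void Start()
    {
        // Cap the strongest foveation level the system may apply.
        OVRManager.fixedFoveatedRenderingLevel =
            OVRManager.FixedFoveatedRenderingLevel.High;

        // Let the runtime raise or lower foveation with GPU load
        // instead of holding it fixed at the level set above.
        OVRManager.useDynamicFixedFoveatedRendering = true;
    }
}
```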


New Unity Plug-In Claims To Boost Visual Clarity Inside Vive Pro Eye

Vive Pro Eye’s eye-tracking technology already employs foveated rendering to provide visually richer VR experiences. But now one company claims it can push those results even further without any additional hardware.

Digital imaging company Almalence just announced the Digital Lens for Vive Pro Eye. It’s a Unity plug-in that accesses the headset’s eye-tracking data. Almalence says it takes this data and “increases the visible resolution and removes chromatic aberrations across the entire field of view”.

Chromatic aberration refers to image distortion caused by different wavelengths of light failing to converge on the same focal plane. It can lead to blurry images with colored fringes.

Here’s an image the company itself provided of the Digital Lens’ effects. Note that we haven’t seen the Lens at work for ourselves; we can’t verify if the effect is really as strong as this image suggests.

And here’s another found on the company’s website. Again, these are materials Almalence itself provides.

The plug-in is available now for free to Vive Pro Eye users upon request as part of a testing phase. Potential commercial contracts will be discussed on a case-by-case basis.

Given its dependence on eye-tracking, the plug-in won’t benefit headsets that lack the hardware. Almalence also says the plug-in adds less than a millisecond of latency.

Of course, Vive Pro Eye is an enterprise-level headset, so it won’t mean much for VR fans at home. But we’ll definitely be interested to see if developers and companies using the plug-in discover a big improvement or not.


Tobii Making Foveated Rendering Eye-Tracking Tech Available To New Headsets

Earlier this year Tobii and HTC Vive partnered to bring foveated rendering tech to the new HTC Vive Pro Eye. Now, Tobii is opening its platform up for others to use.

At SIGGRAPH this week the company announced Tobii Spotlight Technology. It’s essentially the same tech already utilized in the Vive Pro Eye. Tobii’s eye-tracking determines the specific area of the VR display the user is looking at, and the headset fully renders only the center of that area; regions away from the center of your gaze are rendered at reduced detail, which is imperceptible to peripheral vision.

This drastically reduces the strain on hardware processing a VR experience. As such, foveated rendering is largely considered to be one of the key components of bringing VR costs down in the future. A Tobii spokesperson told UploadVR that “Spotlight Technology is intended to support a variety of headsets, including both tethered and standalone headsets.” News on software development kits (SDKs) for Spotlight will also be coming “soon.”

Specific partners weren’t announced today. Vive Pro Eye is an enterprise-level headset, though. Hopefully this news means we’ll start to see eye-tracking in other, consumer-focused devices soon.

Tobii did provide its own benchmarking results for using dynamic foveated rendering in Epic’s ShowdownVR app with the Vive Pro Eye running on Nvidia RTX 2070. You can see those results above, though obviously take note that these are company-generated stats and not something we can verify ourselves.


Editorial: Foveated Rendering Is Essential To Consumer VR’s 2nd Generation


Three years into consumer virtual reality, the technology is still in its first generation. While minor improvements are on the near horizon, there’s a bottleneck holding back a true next generation.

That bottleneck is the development of (good) foveated rendering.

Resolution and FOV: Fundamental Enemies

Almost all consumer headsets today have a field of view of roughly 100 degrees horizontal. For VR to be more immersive, that needs to increase; human vision spans around 210 degrees. But the resolution of today’s headsets isn’t good enough either: individual pixels are still visible and small text is unreadable.

The fundamental problem in significantly improving these specifications is that the wider the field of view, the lower the angular resolution. Angular resolution is the number of pixels per degree — this is how we actually perceive the resolution of displays. That’s why your TV looks great far away, but low detail up close.

Diagram from Oculus Connect 3

A 200 degree headset would have half the angular resolution of a 100 degree one with the same display. This means that by using a display with twice the number of pixels, you would still only get the same low detail as today’s headsets if it was spread over the full range of human vision.
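To make that concrete, take a hypothetical 1,440-pixel-wide panel:

    angular resolution = horizontal pixels / horizontal FOV
    1,440 px / 100 degrees ≈ 14.4 pixels per degree
    1,440 px / 200 degrees = 7.2 pixels per degree

Only by doubling the horizontal pixel count at 200 degrees do you climb back to today’s angular resolution.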

Display panels with the resolution needed will exist soon — that’s just a matter of time. But the problem is in finding a way for your graphics hardware to actually drive them when they arrive.

Even if the goal is only a 50 percent increase in field of view and angular resolution, that would require approximately 4x as many pixels drawn as current VR. The only GPU that could run such a headset at full performance in existing games would be the TITAN RTX, which is obviously impractical.

And if you had a headset with 200 degrees field of view and twice the angular resolution of today? That would require 16x the pixels drawn by the graphics hardware dozens of times every second. No GPU existing today could handle such a task, and at the current rate of progression it would be more than five years until one emerged. That could mean 10 years until the hardware became affordable.
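That 16x figure follows directly from the arithmetic: the pixel count along each axis is the field of view multiplied by the angular resolution, so doubling both doubles each axis twice over:

    pixels per axis = FOV (degrees) × angular resolution (pixels per degree)
    2 × FOV and 2 × PPD → 4 × the pixels on each axis
    4 × horizontal pixels × 4 × vertical pixels = 16 × total pixels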

Foveated Rendering To The Rescue

There’s a solution to this bottleneck. The human eye is only high resolution in the very center. Notice as you look around the room that only what you’re directly looking at is in high detail. Everything around that area isn’t as crystal clear. In fact, that “foveal area” is just 3 degrees wide.

VR headsets can take advantage of this by only rendering where you’re directly looking in high resolution. Everything else can be rendered at a significantly lower resolution. This is called foveated rendering, and is what will allow for significantly higher resolution displays. This, in turn, should enable significantly wider field of view.

Foveated rendering relies on eye tracking. In fact, that eye tracking needs to be essentially perfect. Otherwise, there would be distracting delays in detail when looking around. Not all foveated rendering solutions are created equal. The better the eye tracking, the more gains can be found in rendering efficiencies.
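As a toy illustration of the principle (not any vendor’s actual implementation), a renderer with reliable eye tracking might pick a per-region resolution scale from the region’s angular distance to the gaze point, something like:

```csharp
// Toy illustration of foveated rendering -- not any vendor's
// implementation. Maps a screen region's angular distance from the
// tracked gaze point ("eccentricity") to a render-resolution scale.
static class FoveationSketch
{
    public static float ResolutionScale(float eccentricityDegrees)
    {
        // Full detail inside the ~3 degree foveal area.
        if (eccentricityDegrees < 3f) return 1.0f;

        // Near periphery: half resolution is rarely noticed.
        if (eccentricityDegrees < 15f) return 0.5f;

        // Far periphery: quarter resolution.
        return 0.25f;
    }
}
```

The thresholds here are illustrative; the point is that better eye tracking lets the full-detail region shrink toward that 3 degree fovea, which is where the rendering savings come from.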

Pimax: A Case Study

The Pimax 8K and 5K headsets offer a case study on the need for foveated rendering. The headsets offer a resolution of 2560×1440 per eye to enable a 170 degree field of view.

Despite originally claiming the headset would run on a GTX 1070, Pimax later changed its tune to say that GPU wouldn’t be enough.

Tom’s Hardware recently benchmarked the headset using the $700 RTX 2080. While they were able to run simplistic games like Space Pirate Trainer smoothly, games like Arizona Sunshine and Serious Sam VR required turning the field of view down to 120° and setting the resolution far below native.

Pimax plans to release an eye tracking add-on for its headsets later this year, which the company claims will enable foveated rendering. We’re eager to try it out.

What’s In A Generation Anyways?

The definition of a “generation” is purely semantic. In the world of game consoles, a generation denotes a substantial improvement and happens every five years or so. In the world of smartphones, “generations” are yearly.

To me, a new generation of VR implies a large improvement in specifications and the addition of new features. Minor resolution bumps, in my view, don’t count. When the next generation of VR truly arrives, there won’t be any question about whether it fits the description.

The test I propose for generational leaps in VR: if your long-distance cousin who only visits at Christmas tried your new headset, would they notice the difference from last Christmas? They’re unlikely to notice a minor specification bump, but they will notice significant improvements and new features.

So When Will It Be Available?

At CES in January HTC announced Vive Pro Eye — a refresh of the Vive Pro adding eye tracking. Vive Pro Eye will be one of the first headsets on the market with foveated rendering, though there have been developer kits in the past.

Vive Pro Eye doesn’t have a wider field of view than other headsets. Instead, its foveated rendering is used to enable supersampling: rendering at a higher resolution than the display to get a higher quality image with less (or no) aliasing. Whether this decision was due to the quality of the eye tracking or other unrelated reasons is unclear.

At Oculus Connect 5 in October, Facebook showed off its progress on foveated rendering. The company’s chief VR researcher Michael Abrash demonstrated a new approach that uses machine learning to fill in the low resolution areas, allowing for a tiny foveal region. The example Abrash showed required only 5% of the display resolution to be rendered, a 20x saving. He claimed the result is visually “indistinguishable” from normal rendering in VR.

Abrash concluded with a prediction that this technology would be ready in four years — 2022. Based on Abrash’s other predictions in past Oculus Connects, it seems likely that Facebook is waiting on this technology to enable a true next generation Oculus Rift. In the meantime, it seems like we’ll have a mid-generation refresh to hold us over.



Pimax 8K: Shipping Delayed; New Brainwarp 1.0 Beta Software Released

Pimax is asking its backers for a bit more patience: shipping of the Pimax 8K has been delayed due to the company’s new quality standards, with production and shipping now set to begin on February 10th. In the meantime, owners of a Pimax headset can look forward to the new Brainwarp 1.0 beta software, whose new Smart Smoothing and Foveated Rendering features should noticeably improve performance.

Pimax 8K – Production and Shipping Delayed Until February 10th

Unlike the Pimax 5K+, production and shipping of the Pimax 8K is delayed by several more weeks. The company announced this in the official Pimax forum and apologized to backers for the inconvenience. The reason for the longer wait is a set of higher quality standards that must be met before the devices can be shipped to customers, which is intended to prevent product defects and the returns they would cause.

The official statement reads:

“We have put quality over quantity, and we hope you will support us as we make sure we deliver the best devices we can build.”

The new standard led the company to send numerous 4K LCD displays back to suppliers, creating a production gap that prevented many planned shipments from going out. Because of the upcoming Chinese New Year holiday from February 3rd to 10th, the responsible logistics center is closed and no further headsets can be completed; after that, the remaining devices are to be finished and shipped at full speed.

Pimax Brainwarp 1.0 – Beta Software With New Features: Smart Smoothing and Foveated Rendering

On the software side, however, there is good news: the company has released the new Brainwarp 1.0 beta software, whose new features deliver a substantial performance upgrade. With Smart Smoothing, Pimax ships its equivalent of Oculus’ Asynchronous SpaceWarp and Valve’s Motion Smoothing: it activates automatically when the frame rate drops below 90 FPS and smooths the presentation of VR content. The trigger frame rate can be configured individually for both the 5K+ and the 8K.

An example of foveated rendering from SMI

The update’s second feature is foveated rendering for Nvidia RTX GPUs. It renders only a specific portion of the image in the field of view at full resolution to save computing power, while the edges, which the human eye barely perceives, are rendered at reduced resolution. The feature can be used in three levels: conservative, balanced, and aggressive. For now, game crashes and rendering glitches are possible while it is active; in those cases Pimax recommends restarting the application.

(Sources: Pimax Forum: 1 | 2 | UploadVR)


Google: Patent for Foveated Compression Published

Google has been granted a patent specifically concerned with compression when using foveated rendering. With foveated rendering, only the portion of the image that the user is actually looking at is rendered at full resolution.

Patent for Foveated Compression Published

For software to know which point in the image you are looking at, a technology such as eye-tracking is required. Once that is in place, many advantages follow, since less visible regions can be rendered at lower resolution; this saves computing power and allows a better image in the visible region.

The new patent describes how the bandwidth between display and chip in a VR headset is limited, and how increasing it would lead to significantly higher power consumption. In addition, current compression solutions are not designed to display different levels of sharpness within a single frame. Foveated compression is meant to enable a form of compression that delivers the focused region to the headset with virtually no perceptible loss while preventing artifacts from forming in less visible regions. This does require a special chip, though Google describes it as “relatively simple”.

The idea was filed back in 2017, but the corresponding patent has only now been published. So it may not be long before we see the first VR headsets that make use of Google’s foveated compression.

(Source: UploadVR)


Google ‘Foveated Compression’ Patent Filing Published


Alphabet’s Google filed for a patent for a compression system specifically designed for frames produced by foveated rendering.

Foveated rendering is a process which renders most of the view of a VR headset at lower resolution except for the exact area where the user’s eye is pointed, which is detected with eye tracking. That area in front of the eye — where humans perceive the greatest detail — is rendered at a significantly higher resolution. Foveated rendering is considered crucial for future advancement of VR as it allows for higher resolutions without impossible GPU requirements.

So why compress the frame? Why not simply send the result to the headset as is?

The patent explains that in a standalone headset, the data lanes from the SoC (system-on-chip) to the display have limited bandwidth, and increasing this bandwidth would have a non-trivial effect on energy consumption. Specifications like DisplayPort already include an optional compression system; however, its algorithms were not designed for elements of varying visual acuity in a single frame.

The new compression system gives priority to elements within the high detail area, where the result should be “virtually lossless”. Combining the high and low detail images without visible artefacts requires a custom chip; thankfully, that chip is described as “relatively simple”.
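The patent’s exact algorithm isn’t spelled out here, but the prioritization idea can be sketched. A rough, hypothetical illustration (not Google’s actual scheme): pick a quantization step per tile from its distance to the gaze tile, so foveal tiles stay virtually lossless while peripheral tiles are compressed hard before crossing the SoC-to-display link.

```csharp
// Hypothetical sketch of foveated compression -- not Google's actual
// scheme. Tiles near the gaze point get a small quantization step
// (virtually lossless); distant tiles are quantized aggressively to
// save bandwidth on the SoC-to-display link.
static class FoveatedCompressionSketch
{
    public static int QuantizationStep(int tileX, int tileY,
                                       int gazeTileX, int gazeTileY)
    {
        int dx = tileX - gazeTileX;
        int dy = tileY - gazeTileY;
        int distSq = dx * dx + dy * dy;

        if (distSq <= 2 * 2) return 1;   // foveal tiles: near-lossless
        if (distSq <= 8 * 8) return 4;   // mid-periphery: moderate loss
        return 16;                       // far periphery: aggressive
    }
}
```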

While the patent application was published this week, its filing date is July 2017. The patent is seemingly based on a late 2017 paper from Google Research titled ‘Strategies for Foveated Compression and Transmission’.

The project was led by Dr Behnam Bastani, who led Google’s entire VR rendering research effort. In 2018 Bastani moved to Facebook to work in the FRL division led by Michael Abrash. This seems to follow an increasing trend of Facebook poaching top VR talent from Google and Microsoft.

