Researchers Exploit Natural Quirk of Human Vision for Hidden Redirected Walking in VR

Researchers from Stony Brook University, NVIDIA, and Adobe have devised a system which hides so-called ‘redirected walking’ techniques using saccades, natural eye movements that act like a momentary blind spot. Redirected walking changes the direction that a user is walking to create the illusion of moving through a larger virtual space than the physical space would allow.

Update (4/27/18): The researchers behind this work have reached out with the finished video presentation for the work, which has been included below.

Original Article (3/28/18): At NVIDIA’s GTC 2018 conference this week, researchers Anjul Patney and Qi Sun presented their saccade-driven redirected walking system for dynamic room-scale VR. Redirected walking uses novel techniques to steer users in VR away from real-world obstacles like walls, with the goal of creating the illusion of traversing a larger space than is actually available to the user.

There are a number of ways to implement redirected walking, but the strengths of this saccade-driven method, the researchers say, are that it’s hidden from the user, widely applicable to VR content, and dynamic, allowing the system to direct users away from objects newly introduced into the environment, and even moving objects.

The basic principle behind their work is an exploitation of a natural quirk of human vision—saccadic suppression—to hide small rotations of the virtual scene. Saccades are the quick eye movements that happen when we shift our gaze from one part of a scene to another. Unless the eyes are tracking a moving object or fixated on a single point, they don’t sweep smoothly from one gaze point to the next; instead they dart there in a rapid jump that takes tens of milliseconds.

An eye undertaking regular saccades

Saccadic suppression occurs during these movements, essentially rendering us blind for a brief moment until the eye reaches its new point of fixation. With precise eye-tracking technology from SMI and an HTC Vive headset, the researchers are able to detect and exploit that temporary blindness to hide a slight rotation of the scene from the user. As the user walks forward and looks around the scene, it is slowly rotated, just a few degrees per saccade, such that the user reflexively alters their walking direction in response to the new visual cues.
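To make the mechanism concrete, here’s a minimal sketch of the general idea in Python: flag a saccade when the angular velocity of the gaze exceeds a threshold, and apply a small yaw rotation to the virtual world while the eye is in flight. The velocity threshold, rotation magnitude, and input format are illustrative assumptions, not values or APIs from the researchers’ system.

```python
import numpy as np

# Illustrative sketch only: detect a saccade from successive gaze directions by
# thresholding angular eye velocity, then rotate the virtual world slightly while
# saccadic suppression hides the change. Constants are placeholders, not the
# values used in the Stony Brook / NVIDIA / Adobe work.

SACCADE_VELOCITY_THRESHOLD_DEG_S = 180.0   # eye speeds above this are treated as saccades
ROTATION_PER_SACCADE_DEG = 2.0             # "a few degrees per saccade"

def angular_velocity_deg_s(prev_gaze, curr_gaze, dt):
    """Angular speed between two unit gaze vectors, in degrees per second."""
    cos_angle = np.clip(np.dot(prev_gaze, curr_gaze), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) / dt

def update_world_yaw(world_yaw_deg, prev_gaze, curr_gaze, dt, steer_sign):
    """Apply a hidden yaw rotation to the scene if the user is mid-saccade.

    steer_sign (+1 or -1) comes from the path planner and encodes which way the
    user's walking direction should be nudged.
    """
    if angular_velocity_deg_s(prev_gaze, curr_gaze, dt) > SACCADE_VELOCITY_THRESHOLD_DEG_S:
        # During saccadic suppression the user is effectively blind for a moment,
        # so this small reorientation goes unnoticed.
        world_yaw_deg += steer_sign * ROTATION_PER_SACCADE_DEG
    return world_yaw_deg
```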

This method allows the system to steer users away from real-world walls, even while it seems like they’re walking in a straight line in the virtual world, creating the illusion that the virtual space is significantly larger than the corresponding physical space.

A VR backpack allows a user at GTC 2018 to move through the saccadic redirected walking demo without a tether. | Photo by Road to VR

The researchers have devised a GPU-accelerated real-time path planning system, which dynamically adjusts the hidden scene rotation to redirect the user’s walking. Because the path planning routine operates in real-time, Patney and Sun say that it can account for objects newly introduced into the real-world environment (like a chair), and can even be used to steer users clear of moving obstacles, like pets or potentially even other VR users inhabiting the same space.
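The paper’s GPU planner hasn’t been published yet, but the steering decision it feeds can be illustrated with a toy stand-in: given the user’s physical position, walking direction, and a per-frame list of obstacle points (walls, a newly placed chair, a pet), choose the rotation sign that turns the user away from the nearest obstacle. Everything here is an assumption for illustration, not the researchers’ algorithm.

```python
import numpy as np

# Toy steering decision, not the GPU path-planning method from the paper: pick the
# rotation direction that moves the user's predicted physical path away from the
# nearest obstacle. Obstacles are 2D points in room coordinates, refreshed every
# frame so newly introduced or moving obstacles are accounted for.

def steer_sign(user_pos, walk_dir, obstacles):
    """Return +1 or -1: the sign of the hidden world rotation for the next saccade."""
    obstacles = np.asarray(obstacles, dtype=float)
    to_obstacles = obstacles - np.asarray(user_pos, dtype=float)
    nearest = to_obstacles[np.argmin(np.linalg.norm(to_obstacles, axis=1))]
    # 2D cross product: positive means the nearest obstacle lies to the left of the
    # walking direction, negative means it lies to the right.
    cross = walk_dir[0] * nearest[1] - walk_dir[1] * nearest[0]
    # The sign convention is arbitrary here; the renderer maps it onto whichever
    # world rotation makes the user reflexively veer away from that side.
    return -1 if cross > 0 else 1
```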

The research is being shown off in a working demo this week at GTC 2018. An academic paper based on the work is expected to be published later this year.


NVIDIA to Present Latest Foveated Rendering Research at GTC 2017 in May

Held from May 8-11th in Silicon Valley at the San Jose Convention Center, NVIDIA’s GTC 2017 session schedule is chock-full of deep tech talks that we’re looking forward to. Among them, Senior NVIDIA Research Scientist Anjul Patney will give an overview of the company’s latest findings from its study of the ‘perceptually-based’ approach to foveated rendering.

Simply put, foveated rendering in VR aims to render the highest quality imagery only at the center of your vision where your eye can detect sharp detail, while rendering low quality imagery in the periphery of your vision where your eye is not tuned to pick up high resolution details. Combined with eye-tracking, it’s widely believed that foveated rendering is an important pathway to unlocking retinal-resolution VR rendering in the near future (imagery which is so sharp that any additional detail would be indiscernible).
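In its simplest form that amounts to picking a shading or resolution level per screen region as a function of eccentricity, the angular distance from the tracked gaze point. The sketch below uses made-up eccentricity bands and quality levels purely to illustrate the idea.

```python
import numpy as np

# Simplified illustration of foveated rendering's core decision: coarser shading
# the further a pixel's view direction is from the gaze direction. The band edges
# and quality levels are invented for this example.

FOVEA_DEG = 5.0           # roughly full quality inside this eccentricity
MID_PERIPHERY_DEG = 20.0  # half quality out to here, quarter quality beyond

def eccentricity_deg(pixel_dir, gaze_dir):
    """Angle in degrees between a pixel's view direction and the gaze direction."""
    cos_angle = np.clip(np.dot(pixel_dir, gaze_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def shading_quality(pixel_dir, gaze_dir):
    """Relative shading rate: 1.0 = full quality, smaller values = coarser shading."""
    ecc = eccentricity_deg(pixel_dir, gaze_dir)
    if ecc <= FOVEA_DEG:
        return 1.0
    if ecc <= MID_PERIPHERY_DEG:
        return 0.5
    return 0.25
```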


But foveated rendering is still in its infancy, and early attempts at using simple blur masks over the peripheral view have been shown to be too visible and distracting: a poor approximation of the limits of our peripheral vision.

Last year, NVIDIA researchers demonstrated a compelling new approach to foveated rendering (they call it ‘perceptually based’) which aims to let the end experience of human perception drive the outcome of foveated rendering techniques, rather than the other way around. The new work, which involved a ‘contrast-preserving’ rendering approach, showed a major improvement in making foveated rendering difficult to notice, and was faster than other common techniques to boot.
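As a rough intuition for what ‘contrast-preserving’ means (an illustrative approximation, not NVIDIA’s published algorithm): rather than only blurring the periphery, re-amplify local variation around the local mean so the foveated region keeps an apparent contrast closer to the original image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Rough approximation of the contrast-preserving intuition, not NVIDIA's method:
# blur a peripheral region, then push its local variations back away from the
# local mean so apparent contrast is partially restored.

def foveate_periphery(image, blur_sigma=4.0, contrast_gain=1.5):
    """Blur a (H, W) grayscale peripheral region, then boost its local contrast."""
    blurred = gaussian_filter(image.astype(float), sigma=blur_sigma)
    local_mean = gaussian_filter(blurred, sigma=2 * blur_sigma)
    # Amplify deviations from the local mean to counteract the contrast lost to blur.
    return local_mean + contrast_gain * (blurred - local_mean)
```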

At GTC 2017, one of the researchers leading NVIDIA’s investigations into perceptually based foveated rendering, Anjul Patney, will take to the stage to outline the latest developments. The session description reads:

Foveated rendering is a class of algorithms which increase the performance of virtual reality applications by reducing image quality in the periphery of a user’s vision. In my talk, I will present results from our recent and ongoing work in understanding the perceptual nature of human peripheral vision, and its uses in improving the quality and performance of foveated rendering for virtual reality applications. I will also talk about open challenges in this area.

Patney’s talk is just one of several deep technical talks that we’re looking forward to at GTC 2017.


Here are a number of other sessions that have caught our eye so far, from NASA, Oculus, Pixvana, OTOY, NVIDIA, and Stanford’s Computational Imaging Lab.

NASA’s Hybrid Reality Lab: One Giant Leap for Full Dive – Matthew Noyes, NASA

This session demonstrates how NASA is using consumer VR headsets, game engine technology and NVIDIA’s GPUs to create highly immersive future astronaut training systems augmented with extremely realistic haptic feedback, sound, and additional sensory information, and how these can be used to improve the engineering workflow. Examples explored include a simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR, integration of consumer VR headsets with the Active Response Gravity Offload System, and a space habitat architectural evaluation tool. Attendees will learn about how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.

Light Field Rendering and Streaming for VR and AR – Jules Urbach, OTOY

Jules Urbach, Founder & CEO of OTOY, will discuss OTOY’s cutting-edge light field rendering toolset and platform. OTOY’s light field rendering technology allows for immersive experiences on mobile HMDs and next-gen displays, ideal for VR and AR. OTOY is actively developing a groundbreaking light field rendering pipeline, including the world’s first portable 360 LightStage capture system and a cloud-based graphics platform for creating and streaming light field media for virtual reality and emerging holographic displays.

The Virtual Frontier: Computer Graphics Challenges in Virtual Reality – Morgan McGuire, NVIDIA

Video game 3D graphics are approaching cinema quality thanks to the mature platforms of massively parallel GPUs and the APIs that drive them. Consumer head-mounted virtual reality is a new domain that poses exciting new opportunities and challenges in a wide-open research area. We’ll present the leading edge of computer graphics research for VR across the field. It highlights emerging methods for reducing latency, increasing frame rate and field of view, and matching rendering to both display optics and the human visual system while maximizing image quality.

Insights From the First Year of VR – Jason Holtman, Oculus

There are a myriad of choices to make when jumping into VR development. We’ll explore how to navigate those decisions, and what the lessons from this first generation of VR content means for future titles.

Streaming 10K Video Using GPUs and the Open Projection Format – Sean Safreed, Pixvana

Pixvana has developed a cloud-based system for processing VR video that can stream up to 12K video at HD bit rates. The process is called field-of-view adaptive streaming (FOVAS). FOVAS converts equirectangular spherical format VR video into tiles on AWS in a scalable GPU cluster. Pixvana’s scalable cluster in the cloud delivers over an 80x improvement in tiling and encoding times. The output is compatible with standard streaming architectures and the projection is documented in the Open Projection Format. We’ll cover the cloud-architecture, GPU processing, Open Projection Format, and current customers using the system at scale.
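The abstract doesn’t spell out the tiling scheme, but the general idea of field-of-view adaptive streaming can be sketched as follows: split the sphere into tiles and, whenever the viewer’s head pose updates, fetch high-bitrate versions only of the tiles within a margin of the current view direction. The grid size and margin below are assumptions for illustration, not Pixvana’s actual parameters.

```python
import math

# Illustrative tile selection for field-of-view adaptive streaming: a 360 video is
# split into a yaw/pitch grid of tiles, and only tiles near the current view
# direction are requested at high bitrate. Grid and margin values are invented.

YAW_TILES, PITCH_TILES = 8, 4   # hypothetical 8 x 4 tile grid over the sphere
FOV_MARGIN_DEG = 60.0           # high quality within this angle of the view center

def angular_distance_deg(yaw1, pitch1, yaw2, pitch2):
    """Great-circle angle between two view directions, all angles in degrees."""
    y1, p1, y2, p2 = map(math.radians, (yaw1, pitch1, yaw2, pitch2))
    cos_d = (math.sin(p1) * math.sin(p2) +
             math.cos(p1) * math.cos(p2) * math.cos(y1 - y2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

def tiles_to_stream_high(view_yaw_deg, view_pitch_deg):
    """Return (row, col) indices of tiles to fetch at high bitrate for this pose."""
    selected = []
    for row in range(PITCH_TILES):
        for col in range(YAW_TILES):
            tile_yaw = (col + 0.5) * 360.0 / YAW_TILES - 180.0
            tile_pitch = (row + 0.5) * 180.0 / PITCH_TILES - 90.0
            if angular_distance_deg(view_yaw_deg, view_pitch_deg,
                                    tile_yaw, tile_pitch) <= FOV_MARGIN_DEG:
                selected.append((row, col))
    return selected
```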

Computational Focus-tunable Near-eye Displays – Nitish Padmanaban, Stanford Computational Imaging Lab

We’ll explore unprecedented display modes afforded by computational focus-tunable near-eye displays with the goal of increasing visual comfort and providing more realistic and effective visual experiences in virtual and augmented reality. Applications of VR/AR systems range from communication, entertainment, education, collaborative work, simulation, and training to telesurgery, phobia treatment, and basic vision research. In every immersive experience, the primary interface between the user and the digital world is the near-eye display. Many characteristics of near-eye displays that define the quality of an experience, such as resolution, refresh rate, contrast, and field of view, have been significantly improved over the last years. However, a pervasive source of visual discomfort prevails: the vergence-accommodation conflict (VAC). Further, natural focus cues are not supported by any existing near-eye display.


Road to VR is a proud media sponsor of GTC 2017
