Researchers Exploit Natural Quirk of Human Vision for Hidden Redirected Walking in VR

Researchers from Stony Brook University, NVIDIA, and Adobe have devised a system which hides so-called ‘redirected walking’ techniques behind saccades, natural eye movements which act like a momentary blind spot. Redirected walking subtly changes the direction a user is walking to create the illusion of moving through a virtual space larger than the physical space would allow.

Update (4/27/18): The researchers behind this work have reached out with the finished video presentation for the work, which has been included below.

Original Article (3/28/18): At NVIDIA’s GTC 2018 conference this week, researchers Anjul Patney and Qi Sun presented their saccade-driven redirected walking system for dynamic room-scale VR. Redirected walking uses novel techniques to steer users in VR away from real-world obstacles like walls, with the goal of creating the illusion of traversing a larger space than is actually available to the user.

There are a number of ways to implement redirected walking, but the strengths of this saccade-driven method, the researchers say, are that it’s hidden from the user, widely applicable to VR content, and dynamic, allowing the system to direct users away from objects newly introduced into the environment, and even moving objects.

The basic principle behind their work is the exploitation of a natural quirk of human vision, saccadic suppression, to hide small rotations of the virtual scene. Saccades are the quick eye movements which happen when we move our gaze from one part of a scene to another. Instead of moving in a slow, continuous motion from one gaze point to the next, our eyes (when not tracking a moving object or focused on a single point) dart quickly between fixations, a movement which takes tens of milliseconds.

An eye undertaking regular saccades

Saccadic suppression occurs during these movements, essentially rendering us blind for a brief moment until the eye reaches its new point of fixation. With precise eye-tracking technology from SMI and an HTC Vive headset, the researchers are able to detect and exploit that temporary blindness to hide a slight rotation of the scene from the user. As the user walks forward and looks around the scene, it is slowly rotated, just a few degrees per saccade, such that the user reflexively alters their walking direction in response to the new visual cues.
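In outline, the detect-and-redirect loop described above can be sketched as follows. This is an illustrative approximation, not the researchers’ implementation: the velocity threshold, rotation step, and all function names are assumptions.

```python
import math

# Illustrative sketch of saccade-driven redirection: detect a saccade from
# eye-tracker gaze velocity, then inject a small hidden yaw rotation while
# saccadic suppression masks the change.
SACCADE_VELOCITY_DEG_PER_S = 180.0  # assumed gaze-speed threshold for a saccade
ROTATION_PER_SACCADE_DEG = 2.0      # "just a few degrees per saccade"

def gaze_angular_velocity(prev_dir, cur_dir, dt):
    """Angular speed (degrees/second) between two unit gaze-direction vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(prev_dir, cur_dir))))
    return math.degrees(math.acos(dot)) / dt

def redirect_if_saccade(prev_dir, cur_dir, dt, scene_yaw_deg, steer_sign):
    """Return the scene yaw, nudged slightly if a saccade is in progress.

    steer_sign (+1 or -1) would come from the path planner, pushing the
    user's walked path away from physical obstacles.
    """
    if gaze_angular_velocity(prev_dir, cur_dir, dt) > SACCADE_VELOCITY_DEG_PER_S:
        scene_yaw_deg += steer_sign * ROTATION_PER_SACCADE_DEG
    return scene_yaw_deg
```

Because a saccade lasts only tens of milliseconds, the injected rotation completes before the eye lands on its new fixation point, so the change is never consciously seen.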

This method allows the system to steer users away from real-world walls, even when it seems like they’re walking in a straight line in the virtual world, creating the illusion that the virtual space is significantly larger than the corresponding physical space.

A VR backpack allows a user at GTC 2018 to move through the saccadic redirected walking demo without a tether. | Photo by Road to VR

The researchers have devised a GPU accelerated real-time path planning system, which dynamically adjusts the hidden scene rotation to redirect the user’s walking. Because the path planning routine operates in real-time, Patney and Sun say that it can account for objects newly introduced into the real world environment (like a chair), and can even be used to steer users clear of moving obstacles, like pets or potentially even other VR users inhabiting the same space.
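As a toy illustration of the planning step, the sketch below picks the sign of the hidden rotation by comparing the user’s heading with the direction to the centre of the tracked space. The real system’s GPU-accelerated planner is far more sophisticated; every name and rule here is an assumption.

```python
import math

# Toy stand-in for the path planner: choose the sign of the hidden
# per-saccade rotation so that repeated nudges bend the user's walked path
# toward the centre of the tracked space, away from the walls.
def steering_sign(pos, heading_deg, room_centre=(0.0, 0.0)):
    """Return +1 or -1: which way to rotate the scene, given the user's
    physical (x, y) position in metres and heading in degrees."""
    to_centre = math.degrees(math.atan2(room_centre[1] - pos[1],
                                        room_centre[0] - pos[0]))
    # Signed smallest angle from the current heading to the centre direction.
    delta = (to_centre - heading_deg + 180.0) % 360.0 - 180.0
    return 1 if delta >= 0 else -1
```

A real planner would recompute this every frame and also weight newly detected obstacles and moving objects, which is what makes the dynamic behaviour the researchers describe possible.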

The research is being shown off in a working demo this week at GTC 2018. An academic paper based on the work is expected to be published later this year.

The post Researchers Exploit Natural Quirk of Human Vision for Hidden Redirected Walking in VR appeared first on Road to VR.

Hands-on with the Holodeck: Ready Player One – Escape Room

The motion-picture release of Ready Player One is, unsurprisingly, accompanied by a number of virtual reality (VR) tie-ins. One that home users may not get to jump into any time soon is NVIDIA’s own Ready Player One – Escape Room, which premiered at the company’s GTC 2018 event in San Jose.

As the name suggests, Ready Player One – Escape Room is a puzzle challenge set in a single room. In the version available at GTC 2018, up to three people could enter the space at a time, playing with HTC Vive Pro head-mounted displays (HMDs) and working together to solve a series of puzzles by interacting with objects within a virtual recreation of an area from the Ready Player One universe; namely a living room with a decidedly 1980s feel.

The objects players can interact with are limited; if you can touch it, it’s likely part of a puzzle you will have to solve. The team of players has 10 minutes to search the room for the correct objects at the correct time, and with even a small amount of knowledge of 1980s popular culture the challenges are relatively simple.

To begin with, there is a group of VHS cassette tapes laid on a counter alongside chalk outlines for those which are missing, the task simply being to find and place the remaining tapes. Solving this clue makes a Rubik’s Cube project a Batman symbol, prompting you to insert the VHS cassette of Tim Burton’s 1989 Batman motion-picture into the VHS player, which in turn makes a poster fall off the wall to reveal a Thundercats logo. As you can tell from this short explanation (which sadly covers about half of the experience) the puzzles are far from challenging.

Of course, with NVIDIA’s Holodeck it’s less about the pre-designed experience and more about the player interaction. To that end, Ready Player One – Escape Room is one of the most enjoyable co-operative VR videogames around. With full voice communication and the ability to hand objects to one another, what is essentially an Easter egg hunt is most certainly collaborative. Six eyes are better than two, after all.

Ready Player One – Escape Room is also a visual treat, aptly showcasing the additional clarity in resolution of the HTC Vive Pro over the HTC Vive. Being able to read the text on posters and magazine covers, as well as make out incidental details in the environment (showcasing the attention to detail of the artists who worked on the project), makes a huge difference to the level of immersion. A question has to be asked regarding the design of the players’ avatars; the clean white-and-green visage is certainly a modern interpretation and ill-fitting with the 1980s aesthetic prevalent elsewhere, but it’s unlikely that many partaking in Ready Player One – Escape Room as one of their first VR experiences will take much umbrage at this disparity.

With social VR having become a key talking point within the industry of late, it’s important that co-operative experiences such as Ready Player One – Escape Room take centre stage, showcasing that even the simplest of experiences can be improved by the added human factor. It’s a shame, then, that NVIDIA doesn’t currently have any plans to showcase the piece within the public domain. This may of course change down the line, and VRFocus most certainly hopes that in time an expanded version of Ready Player One – Escape Room will be offered to VR’s early adopters at home.

Liveblog: GTC 2018 – ‘Extreme Multi-View Rendering for Light-Field Displays’

VRFocus is once again providing liveblog coverage of sessions (where we can) at this year’s GPU Technology Conference (GTC), hosted by NVIDIA in San Jose, California. At GTC we’re expecting a number of sessions touching on the fields of virtual reality (VR) and augmented reality (AR), and also how mixed reality (MR) and related technologies might fit into the creative mix of both the present and future.

Next up, Thomas Burnett, CTO of FoVI3D, takes to the stage. “A light-field display projects a 3D aerial scene that is visible to the unaided eye without glasses or head tracking and allows for the perspective correct visualization of the scene within the display’s projection volume. The light-field display computes a synthetic radiance image from a 3D scene/model and projects the radiance image through a lens system to construct the 3D aerial scene. Binocular disparity, occlusion, specular highlights, gradient shading, and other expected depth cues are correct from the viewer’s perspective as in the natural real-world light-field. There are a few processes for generating the synthetic radiance image; the difference between the two most common rasterization approaches is the order in which they decompose the 4D light-field (two dimensions of position, two dimensions of direction) into 2D rendering passes. This talk will describe Double Frustum and Oblique Slice and Dice synthetic radiance image rendering algorithms and their effective use for wide-area light-field displays.”
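To make the abstract’s idea of decomposing a 4D light field into 2D rendering passes concrete, here is a minimal sketch of the data layout. The dimensions, names, and placeholder renderer are all assumptions for illustration, not FoVI3D’s code.

```python
import numpy as np

# A synthetic radiance image stored as a 4D array L[u, v, s, t]: hogel
# position (u, v) on the display plane, ray direction (s, t) within each
# hogel. A double-frustum-style decomposition fixes one hogel position
# (u, v) per pass and renders its full 2D (s, t) directional slice.
hogels_u, hogels_v = 8, 8    # positions on the display plane
dirs_s, dirs_t = 16, 16      # directions per hogel

radiance = np.zeros((hogels_u, hogels_v, dirs_s, dirs_t), dtype=np.float32)

def render_hogel(u, v):
    """Placeholder for one per-hogel render pass: a (dirs_s, dirs_t) image."""
    return np.full((dirs_s, dirs_t), float(u * hogels_v + v), dtype=np.float32)

for u in range(hogels_u):
    for v in range(hogels_v):
        radiance[u, v] = render_hogel(u, v)  # one 2D pass per hogel position
```

An oblique slice-and-dice decomposition would instead iterate over the direction axes (s, t), rendering one 2D positional slice per pass; the difference is purely the order of decomposition, as the abstract notes.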

Your liveblogger for the event is Kevin Joyce.

Liveblog: GTC 2018 – ‘Light Field Rendering and Streaming for VR and AR’


Our second covered talk today comes from OTOY, and is being held by the company’s CEO, Jules Urbach. Urbach is currently busy with his two latest ventures, OTOY and LightStage, which aim to revolutionise 3D content capture, creation, and delivery.

“We’ll discuss OTOY’s cutting-edge light field rendering toolset and platform, which allows for immersive experiences on mobile HMDs and next-gen displays, making it ideal for VR and AR. OTOY is developing a groundbreaking light field rendering pipeline, including the world’s first portable 360 LightStage capture system and a cloud-based graphics platform for creating and streaming light field media for VR and emerging holographic displays.”

Your liveblogger for the event is Kevin Joyce.

Liveblog: GTC 2018 – ‘Tackling the Realities of Virtual Reality’


First up today is a talk featuring David Luebke, Vice President of Graphics Research at NVIDIA. Luebke helped found NVIDIA Research in 2006 after eight years as a professor at the University of Virginia, where his research helped pioneer the use of GPUs for general-purpose computing. He will be discussing the work the team has been doing with VR, and how limitations and developments continue to shape the technology.

“David Luebke, NVIDIA’s VP of Graphics Research, will describe NVIDIA’s vision for the future of virtual and augmented reality. Luebke will review some of the “realities of virtual reality”: challenges presented by Moore’s Law, battery technology, optics, wired and wireless connections. He will then discuss the implications and opportunities presented by these challenges, such as foveation and specialization, and conclude with a deep dive into how rendering technology, such as ray tracing, can evolve to solve the realities of virtual reality.”

Your liveblogger for the event is Kevin Joyce.

Driving a Real Car in VR Takes a Step Towards Reality with NVIDIA’s Holodeck

Racing around tracks in virtual reality (VR) can be great fun, really showcasing how immersive the technology can be. But how about taking that principle and combining it with the real world, so you’re actually driving a real car? Well, that’s just what NVIDIA has been showing off today during CEO Jensen Huang’s keynote address at the GPU Technology Conference (GTC) 2018.


A big part of NVIDIA’s keynote address was focused on self-driving cars and its AI autonomous vehicle platform DRIVE, which has been designed to help automakers create safe self-driving vehicles by putting the AI through a series of virtual environments, simulating various hazards along the way.

But that’s not all the company has been working on. Combining NVIDIA’s Holodeck with a VR setup (an HTC Vive, racing seat and steering wheel), plus a car kitted out for remote control, NVIDIA showcased someone inside the convention hall driving a car parked outside, all whilst wearing the headset.

What they could see was a digital representation of the car inside Holodeck, with cameras on the outside of the vehicle providing a view of the real world. They could then drive the car around as if they were playing an actual videogame, albeit very slowly, as they were in a car park and didn’t want to hit anyone.


It’s certainly an interesting use of the technology and could have major applications in the future when it comes to motorsport. Drivers would no longer be in danger of crashing, making the sport far safer. Safety features could then be dispensed with, making the cars lighter and faster. At the extreme, emulating films like Death Race, the cars could be equipped with all sorts of offensive and defensive capabilities to make a visual spectacle for the audience.

Drone racing, for example, already uses VR headsets so that racers can all sit together in one location while the drones are flown around a set course in a warehouse. With NVIDIA’s tech this could simply be scaled up to bigger vehicles.

Of course, there are far more practical and serious applications for the technology, such as being able to transport yourself inside an autonomous vehicle or robot that can reach dangerous areas, or help save a life. Whatever happens, VRFocus will always keep you updated on the latest use cases for VR.

NVIDIA Unveil the Quadro GV100 for Pro Visualisation

It’s the second day of NVIDIA’s GPU Technology Conference (GTC) 2018 and CEO Jensen Huang has taken to the stage for the main event, his annual keynote address. Detailing NVIDIA’s current and future plans, Huang has now unveiled the company’s latest enterprise-focused GPU, the Quadro GV100.


The Quadro GV100 is the latest graphics card aimed at the professional market, designed for companies needing powerful visualisation hardware for tasks including virtual reality (VR), augmented reality (AR), mixed reality (MR) and other design-based work.

Based on NVIDIA’s Volta architecture, the Quadro GV100 is the largest GPU in the series the company has made, featuring 32GB of HBM2 memory built in (scalable to 64GB, with 10,240 CUDA Cores and 236 TFLOPS of Tensor Core performance, when two cards are paired using two-way NVLink technology). NVIDIA has confirmed that the Quadro GV100 delivers 7.4 teraflops of double-precision performance, 14.8 teraflops of single-precision, and 118.5 teraflops when used for deep learning tasks.

The card has been designed to power NVIDIA’s real-time ray tracing technology, RTX, which was first unveiled during Epic Games’ State of Unreal keynote at the Game Developers Conference (GDC) 2018 last week. Ray tracing is a lighting technique that allows developers to create even more realistic scenes, which would normally take hours of rendering; with RTX, NVIDIA claims, those rendering hours will be dramatically reduced.


Obviously the Quadro GV100 isn’t designed for the average consumer. While a price for the graphics card hasn’t been revealed, its predecessor, the GP100, cost £6,499. The Quadro GV100 will be available through NVIDIA’s website after the keynote. Additionally, Dell, HP and Lenovo will be hardware partners, shipping workstations with the Quadro GV100 next month.

VRFocus is at GTC 2018 all week, bringing you the latest news, announcements and liveblogs regarding VR, AR and immersive technologies, so stay tuned for more updates.

NVIDIA Reveal VRWorks 360 Degree Video 1.5 Release, Partners Comment On SDK’s Inclusion

When it comes to telling a story there’s nothing quite like an immersive experience. Whether in virtual reality (VR) or augmented reality (AR), the different ways you can use the technology to craft a tale are pushing an author’s ability to show their world in new directions. This naturally includes 360-degree video, and we’ve seen a growing number of productions being made, awards being presented and recognition being received within the filmmaking industry, all linked to VR and AR, and almost always involving 360-degree footage.

As part of NVIDIA’s GPU Technology Conference (GTC), taking place all this week in San Jose, California, the company has revealed a number of announcements relating to the VRWorks 360 Video SDK, the latest version of which (v1.5) was released today. Three firms involved in the creation of media, Z CAM, STRIVR and Pixvana, all revealed their adoption of the SDK, speaking to the NVIDIA blog.

“Because NVIDIA VRWorks 360 Video SDK shared the same API between Windows and Linux, it was super-fast and easy to integrate into our Linux cloud platform,” explained Sean Safreed, Pixvana’s Product Director and Co-Founder. Pixvana develops the SPIN Studio platform, which can be used to stitch together 360-degree footage and naturally integrates the VRWorks 360 Video SDK. “The ability to access the VRWorks SDK through our powerful GPU-accelerated cloud backend simplifies the workflow and massively speeds the process from shot to review to final distribution, which our customers love.”

NVIDIA gave particular focus to Z CAM, the earliest adopter of the VRWorks 360 Video SDK within the camera industry. At GDC, Z CAM unveiled its V1 Professional VR Camera, able to record 6K 360-degree stereo video at 60 FPS using ten cameras, and like the company’s earlier products it will continue to support the SDK. Z CAM’s CEO explained the importance of integrating NVIDIA’s work.

“Integrating the VRWorks 360 Video SDK made it easy for us to enable live streaming of high-quality, 360-degree stereo video, and to support live streaming of both mono and stereo 360 VR, so our customers can really push the boundaries of live storytelling.”

You can see a demonstration of Z CAM’s new product below: a video recorded at Times Square in New York City and stitched together using the company’s WonderStitch software with the NVIDIA SDK.

Last is VR production house STRIVR, last seen on VRFocus back in January, when its VR training programs were helping athletes prepare for this year’s Winter Olympics in Pyeongchang, South Korea. The company’s Chief Technical Officer, Brian Meek, described how it’s using the SDK to speed up production of video for its immersive training platform.

“Integrating VRWorks 360 Video SDK accelerated the STRIVR stitching process from 15 fps to between 45 and 60 fps, a 3-4x performance gain,” explains Meek, “which translates into much faster turnaround time from filming to delivery.”

There’s sure to be plenty of news from this year’s GTC event. Look out for more on VRFocus throughout the week.

Ready Player One Gets the NVIDIA Holodeck Treatment at GTC 2018

Today marks the start of NVIDIA’s GPU Technology Conference (GTC) 2018 event in San Jose, California, with the technology manufacturer ready to start four days of talks and sessions focused on virtual reality (VR), AI, graphics cards and a whole lot more. To get the ball rolling, and with Ready Player One due to launch this week, NVIDIA is treating attendees to a VR experience on the company’s Holodeck.


Teaming up with Warner Bros. and HTC Vive, NVIDIA Holodeck will use 3D assets from Ready Player One, helping transport players to the year 2045 and “Aech’s basement,” where, in an escape room-style experience, they join a quest in which solving one puzzle triggers the next. Teams will have to work together to complete the challenge within the allotted time to win.

“Combining physics with natural interactions, NVIDIA Holodeck creates a virtual world that looks and feels real to players, who can interact with virtual objects while exploring richly detailed scenes. It imports complex, detailed models consisting of tens of millions of polygons in real-time VR, making it the perfect environment to showcase the cinema-quality assets of Ready Player One,” states NVIDIA.

Holodeck was first unveiled during last year’s keynote address by founder and CEO Jensen Huang. Designed as a VR collaboration platform that brings people together from around the world in ultra-realistic virtual experiences, Holodeck has previously been demonstrated by Swedish supercar maker Koenigsegg.

Then, in October 2017, NVIDIA launched the application process for product designers, application developers, architects and other 3D content creators to sign up for early access.

GTC 2018 is likely to feature much more of Holodeck as the week progresses, with NVIDIA having previously mentioned plans to address deep learning techniques in virtual environments, including capabilities for AI-based training, simulation and content creation using the platform. VRFocus will be at GTC 2018 to bring you all the latest news and announcements as they happen.