Microsoft Researchers Explore Eye Tracking Uses For VR

Microsoft researchers released some new work related to eye-tracking this week.

The 13-page paper, titled Mise-Unseen: Using Eye-Tracking to Hide Virtual Reality Scene Changes in Plain Sight, is written by Microsoft intern and University of Potsdam PhD student Sebastian Marwecki, along with Microsoft researchers Andy Wilson, Eyal Ofek, Mar Gonzalez Franco, and Christian Holz.

Accompanied by a video, the paper explains how eye-tracking allows a scene to change without the user noticing: by keeping track of where the eyes are pointed, the system alters objects only while they sit in the user's peripheral vision.

Eye-tracking for VR is not a new idea, and some headsets already have the technology built in. Work is underway at all the major companies, however, to track eye movements more accurately and reliably, because next-generation headsets may be able to use the information in various ways. The paper from Microsoft researchers helps explain some of those potential applications. For example, objects in a scene can be changed to help a user solve a puzzle. Likewise, gaze can be used to predict which of several options a user is inclined to pick. The accompanying video demonstrates this with two weapon choices and a single physical prop: using the user's gaze, the software determines which weapon the user is likely to pick, and then moves that virtual weapon to line up with the physical prop.
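To make the core mechanism concrete, here is a minimal sketch (in Python) of the gating logic such a gaze-contingent system needs: a pending change is applied only once the affected object falls outside the foveal region. The 10-degree foveal radius, the example vectors, and the function names are assumptions for illustration, not details taken from the paper.

import math

# Illustrative gaze-contingent gating: apply a pending scene change only when
# the affected object lies outside the user's foveal region. The threshold and
# vectors below are assumptions for this sketch, not values from Mise-Unseen.

FOVEA_RADIUS_DEG = 10.0  # eccentricities beyond this are treated as peripheral


def angle_between(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))


def try_apply_change(gaze_dir, object_dir, apply_change):
    """Apply the change only if the object is currently in peripheral vision."""
    if angle_between(gaze_dir, object_dir) > FOVEA_RADIUS_DEG:
        apply_change()
        return True
    return False  # the user is looking too close to the object; wait


# Example: swap a prop while the user looks straight ahead and the prop
# sits roughly 45 degrees off to the side.
try_apply_change((0.0, 0.0, 1.0), (1.0, 0.0, 1.0), lambda: print("prop swapped unseen"))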

Perhaps most interestingly, the report covers the application of foveated rendering to improve rendering efficiency while also exploring gaze-tracking as a method for reducing sickness induced by simulated locomotion. The paper compares a popular comfort technique in VR software design, which narrows the field of view into a kind of tunnel vision during periods of fast simulated movement, against another condition wherein “the participant had a full field of view, but motion outside the fovea was removed by reducing the update rate to 1Hz. We cross-fade between frames and add motion blur to hide the reduced update rate.” According to the researchers, “most participants preferred” the latter condition, with one participant reporting “there is no motion sickness.”
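As a rough sketch of how that second condition could be timed, the snippet below keeps the fovea at full frame rate while refreshing the periphery from a snapshot at roughly 1Hz and cross-fading between snapshots. Only the 1Hz figure comes from the paper; the fade length, frame rate, and function names are assumptions for illustration.

# Hypothetical timing logic for a low-update-rate periphery: the fovea is
# re-rendered every frame, while everything outside it is shown from a
# snapshot refreshed about once per second, cross-faded to mask the switch.

PERIPHERY_PERIOD = 1.0   # seconds between peripheral snapshots (~1Hz, per the paper)
CROSSFADE = 0.2          # seconds spent blending old and new snapshots (assumed)


def peripheral_blend(t_now, t_last_snapshot):
    """Return (take_new_snapshot, blend_factor) for the current frame time."""
    elapsed = t_now - t_last_snapshot
    if elapsed >= PERIPHERY_PERIOD:
        return True, 0.0                          # start a fresh cross-fade
    return False, min(1.0, elapsed / CROSSFADE)   # ramp 0 -> 1 over the fade window


# Example: at 90 frames per second, only about 1 frame in 90 grabs a new
# peripheral snapshot; the rest reuse (and blend) the previous one.
t_last = 0.0
for frame in range(270):        # three simulated seconds
    t = frame / 90.0
    fresh, blend = peripheral_blend(t, t_last)
    if fresh:
        t_last = t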

Overall, the findings are very interesting. With research like this, it is easy to see why eye-tracking is likely to be a key part of truly next-generation VR headsets.

What are your thoughts on the findings? Let us know in the comments.


Facebook Publishes New Research on Hyper-realistic Virtual Avatars

Facebook Reality Labs, the company’s AR/VR R&D group, published detailed research on a method for hyper-realistic real-time virtual avatars, expanding on prior work which the company calls ‘Codec Avatars’.

Facebook Reality Labs has created a system capable of animating virtual avatars in real-time with unprecedented fidelity from compact hardware. From just three standard cameras inside the headset, which capture the user’s eyes and mouth, the system is able to represent the nuances of a specific individual’s complex face gestures more accurately than previous methods.

More so than just sticking cameras on to a headset, the thrust of the research is the technical magic behind using the incoming images to drive a virtual representation of the user.

The solution relies heavily on machine learning and computer vision. “Our system runs live in real-time and it works for a wide range of expressions, including puffed-in cheeks, biting lips, moving tongues, and details like wrinkles that are hard to be precisely animated for previous methods,” says one of the authors.

Facebook Reality Labs published a technical video summary of the work to coincide with SIGGRAPH 2019:

The group also published their full research paper, which dives even deeper into the methodology and math behind the system. The work, ‘VR Facial Animation via Multiview Image Translation’, was published in ACM Transactions on Graphics, which is self-described as the “foremost peer-reviewed journal in graphics.” The paper is authored by Shih-En Wei, Jason Saragih, Tomas Simon, Adam W. Harley, Stephen Lombardi, Michal Perdoch, Alexander Hypes, Dawei Wang, Hernan Badino, and Yaser Sheikh.

(a) The ‘Training’ headset, with nine cameras. (b) The ‘Tracking’ headset with three cameras; camera positions shared with the Training headset circled in red. | Image courtesy Facebook Reality Labs

The paper explains how the project involved the creation of two separate experimental headsets, a ‘Training’ headset and a ‘Tracking’ headset.

The Training headset is bulkier and uses nine cameras, which allow it to capture a wider range of views of the subject’s face and eyes. Doing so makes it easier to find the ‘correspondence’ between the input images and a previously captured digital scan of the user (that is, deciding which parts of the input images represent which parts of the avatar). The paper says that this process is “automatically found through self-supervised multiview image translation, which does not require manual annotation or one-to-one correspondence between domains.”

Once correspondence is established, the more compact ‘Tracking’ headset can be used. The alignment of its three cameras mirrors that of three of the nine cameras on the ‘Training’ headset; the views from these three cameras are better understood thanks to the data collected with the ‘Training’ headset, which allows their input to accurately drive animations of the avatar.
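For readers who want a mental model of the runtime flow, the sketch below (in Python, using PyTorch) compresses it to its bare bones: headset camera images pass through an encoder into a compact expression code, and a decoder turns that code into avatar geometry. The layer sizes, the 64x64 image resolution, the vertex count, and the class name are invented for illustration and bear no relation to the actual Codec Avatars architecture.

import torch
import torch.nn as nn

# A greatly simplified stand-in for the encode/decode idea: three headset
# camera crops -> latent expression code -> per-vertex offsets for the avatar.

class TinyAvatarCodec(nn.Module):
    def __init__(self, latent_dim=128, n_vertices=1024):
        super().__init__()
        self.n_vertices = n_vertices
        # Encoder: three 64x64 grayscale camera crops -> latent expression code.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder: latent code -> 3D offsets for each avatar mesh vertex.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_vertices * 3),
        )

    def forward(self, cams):                 # cams: (batch, 3, 64, 64)
        code = self.encoder(cams)            # the compact expression code
        return self.decoder(code).view(-1, self.n_vertices, 3)


model = TinyAvatarCodec()
frames = torch.rand(1, 3, 64, 64)            # stand-in for eye/mouth camera crops
vertex_offsets = model(frames)               # would drive the avatar mesh each frame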

The paper focuses heavily on the accuracy of the system. Prior methods produce lifelike output, but the match between the user’s actual face and its representation breaks down in key areas, especially with extreme expressions and in the relationship between what the eyes are doing and what the mouth is doing.

Image courtesy Facebook Reality Labs

The work is especially impressive when you step back and consider what’s actually happening here: for a user whose face is largely obscured by a headset, extremely close camera shots are being used to accurately rebuild an unobscured view of the face.

As impressive as it is, the approach still has major hurdles preventing mainstream adoption. The reliance on both a detailed preliminary scan of the user and the initial need to use the ‘Training’ headset would necessitate something along the lines of ‘scanning centers’ where users could go to have their avatar scanned and trained (might as well capture a custom HRTF while you’re at it!). Until VR is a significant part of the way society communicates, it seems unlikely that such centers would be viable. However, advanced sensing technologies and continued improvements in automatic correspondence building atop this work could eventually lead to a viable in-home process.


A Closer Look At What Microsoft’s SeeingVR Offers The Visually Impaired


Microsoft Research, in partnership with Cornell University, developed a range of techniques to make virtual reality accessible to the visually impaired.

VR is a heavily visual medium. Most VR apps and games assume the user has full visual ability. But just like in real life, some users in virtual environments are visually impaired. In the real world a range of measures are taken to accommodate such users, but in VR little comparable effort has been made so far.

The researchers came up with 14 specific tools to tackle this problem, delivered as plugins for the Unity engine. Nine of the tools work automatically, with no developer effort required; the remaining five need the developer of each app to do some integration work to support them.

It’s estimated that around 200 million people worldwide are visually impaired. If Microsoft releases these tools as engine plugins, they could make a huge difference in these users’ ability to use virtual reality. For VR to succeed as a medium it must accommodate everyone.

Automatic Tools

Magnification Lens: Mimicking the most common Windows OS visual accessibility tool, the magnification lens magnifies around half of the user’s field of view by 10x.

Bifocal Lens: Much the same as bifocal glasses in the real world, this tool adds a smaller but persistent magnification near the bottom of the user’s vision. This allows for constant spatial awareness while still enabling reading at a distance.

Brightness Lens: Some people have different brightness sensitivity, so this tool allows the user to adjust the brightness of the image all the way from 50% to 500% to make out details.

Contrast Lens: Similar to the Brightness Lens, this tool lets the user modify the contrast so that low-contrast details can be made out, on an adjustable scale from 1 to 10 (a simple sketch of this kind of adjustment follows the list of automatic tools below).

Edge Enhancement: A more sophisticated way to achieve the goal of the Contrast Lens, this tool detects visible edges based on depth and outlines them.

Peripheral Remapping: This tool is for people without peripheral vision. It uses the same edge detection technique as Edge Enhancement but shows the edges as an overlay in the center of the user’s field of view, giving them spatial awareness.

Text Augmentation: This tool automatically changes all text to white or black (whichever is most appropriate) and changes the font to Arial. The researchers claim Arial is proven to be more readable. The user can also change the text to bold or increase the size.

Text to Speech: This tool gives the user a virtual laser pointer. Whichever text they point at will be read aloud using speech synthesis technology.

Depth Measurement: For people with depth perception issues, this tool adds a ball to the end of the laser pointer, which lets them easily see the distance they are pointing to.
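As a simple illustration of how lenses like the Brightness and Contrast tools above might work as a post-process, the sketch below scales pixel values across the described ranges. The formulas and the 0-to-1 pixel range are assumptions for illustration, not SeeingVR's actual implementation.

# Hypothetical per-pixel adjustments for the Brightness Lens (50%-500%) and
# Contrast Lens (scale 1-10); pixel values are assumed to lie in 0..1.

def apply_brightness(pixel, factor):
    """factor ranges from 0.5 (50%) to 5.0 (500%); result is clamped to 0..1."""
    return min(1.0, max(0.0, pixel * factor))


def apply_contrast(pixel, level):
    """level 1..10 is mapped to a gain applied around mid-grey (0.5)."""
    gain = 1.0 + (level - 1) * 0.25
    return min(1.0, max(0.0, 0.5 + (pixel - 0.5) * gain))


# Example: a dim, low-contrast pixel becomes easier to distinguish.
print(apply_brightness(0.12, 3.0))   # 0.36
print(apply_contrast(0.55, 6))       # 0.6125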

Tools Requiring Developer Effort

Object Recognition: Just like “alt text” on images on the 2D web, this tool reads aloud (using speech synthesis) a description of the virtual object the user is pointing at; a sketch of the kind of developer-facing hooks these tools require follows this list.

Highlight: Users with vision issues may struggle to find the relevant objects in a game scene. By simply highlighting them in the same way as Edge Enhancement, this tool helps those users find their way in games.

Guideline: This tool works alongside Highlight. When the user isn’t looking at the relevant objects, Guideline draws arrows pointing towards them.

Recoloring: For users with very serious vision problems, this tool recolors the entire scene to simple colors.
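To give a feel for what "developer effort" might look like in practice, here is a hypothetical sketch of the hooks these five tools would need: developers tag objects with spoken descriptions and mark which ones are goals so Highlight and Guideline can find them. The class names and fields are invented for illustration and are not SeeingVR's actual Unity API.

from dataclasses import dataclass, field

# Invented developer-facing annotations: a description for Object Recognition
# (like web alt text) and a goal flag for Highlight and Guideline to target.

@dataclass
class AccessibleObject:
    name: str
    description: str          # read aloud by Object Recognition
    is_goal: bool = False     # Highlight outlines it; Guideline points to it


@dataclass
class SceneAnnotations:
    objects: list = field(default_factory=list)

    def describe(self, name):
        for obj in self.objects:
            if obj.name == name:
                return obj.description
        return "unlabeled object"


# Example: a developer tags the key the player needs to find.
scene = SceneAnnotations()
scene.objects.append(AccessibleObject("rusty_key", "a small rusty key on the table", is_goal=True))
print(scene.describe("rusty_key"))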



Oculus Research Becomes ‘Facebook Reality Labs’, Creating “further and faster” Leaps in AR/VR

Oculus today announced it’s rebranding Oculus Research, the company’s R&D lab, to the newly created Facebook Reality Labs (FRL). The shift, the company says, better addresses the increasingly important role of research and development in AR/VR while emphasizing collaboration with Facebook’s other skunkworks, something Oculus Chief Scientist Michael Abrash says is allowing for “further and faster” development of leading-edge AR/VR tech.

The lab’s focus on the future hasn’t changed, the company says, although the new name reflects a new role the R&D group plays “not only at Oculus, but also across Facebook’s AR/VR organization, which includes Building 8, Camera, and Social VR,” an Oculus spokesperson told Road to VR.

Facebook’s Building 8 specializes in researching and productizing advances in AR, VR, AI and more.

The company announced the change via a Facebook post by Oculus Chief Scientist Michael Abrash.

Image courtesy Oculus

Abrash famously offered up some bold predictions at Oculus Connect 3 back in 2016, which outlined a pretty specific direction for AR/VR on its five-year march forward, including the prediction that VR headsets would double the number of current pixels per degree to 30, push the resolution to around 4,000 × 4,000 pixels per display, and widen the field of view to 140 degrees. Both Oculus Rift and HTC Vive currently offer 15 pixels per degree, a resolution of 1,080 × 1,200 pixels per display, and a field of view of around 110 degrees.

Abrash presciently announced then that the current tech’s fixed depth of focus would also likely become variable within 5 years. Many of these technologies, including varifocal displays and 140 degree field of view, are incorporated in Oculus’ Half Dome prototype, which was revealed last week at Facebook’s F8 developer conference.

Image courtesy Facebook

“We are just a year and a half along now [after Connect 3], and I would say those predictions are holding up well,” Abrash says. “In fact, the truth is that I probably undershot, thanks to Facebook’s growing investment in FRL, which allows us to push the boundaries of what it takes to build great experiences further and faster. We are helping Oculus and all of Facebook create trailblazing AR and VR experiences, from what’s most affordable to leading edge.”

SEE ALSO
Oculus on Half Dome Prototype: 'don't expect to see everything in a product anytime soon'

Abrash says FRL “brings together a world-class R&D team of researchers, developers, and engineers with the shared goal of developing AR and VR across the spectrum,” and that while there are plenty of issues with VR and AR at present, “they’re all solvable, and they are going to get solved.”

With increasing investment, the company will no doubt continue its mission to push forward a number of related fields, including optics, displays, audio sensing, computer vision, scene reconstruction, graphics animation, UX, haptics, machine learning, software and hardware engineering, social interactions, material sciences, and perceptual psychology, all of which is crucial to future generations of VR/AR devices.


Oculus on Half Dome Prototype: ‘don’t expect to see everything in a product anytime soon’

At Facebook’s F8 developer conference Oculus revealed a glimpse at an intriguing new headset prototype dubbed ‘Half Dome’. Featuring a 140 degree field of view, varifocal displays, and what appears to be eye-tracking, the prototype is a tantalizing peek at the company’s research and what may lie ahead. Just don’t expect to see everything shown in a product “anytime soon,” says Oculus co-founder and Head of Rift Nate Mitchell.

Beyond the fact that Oculus is undoubtedly working on a second flagship PC VR headset, nothing is known about it thus far. Derailing the hype train somewhat, Mitchell took to Reddit to address comments reeling from the prospect of Half Dome’s technology making its way into a potential Rift 2.

Image courtesy Facebook

“Seriously, a varifocal display?” writes Reddit user ‘DarthBuzzard’. “I honestly expected that to be CV3 and CV2 would have simulated depth of focus rather than full depth of focus. Looks like things really are moving faster than expected!”

In response, Mitchell had this to say:

“[Maria Fernandez Guajardo, Head of Product Management, Core Tech at Oculus] covered a bunch of areas of long term research for us. This is just a peek into some feature prototypes we’ve been working on. However, don’t expect to see all of these technologies in a product anytime soon.”

While this doesn’t entirely negate a prospective Rift 2 with varifocal displays, 140 degree field of view, and eye-tracking (or any combination of the three), being able to productize all of these things into a single headset will likely take time to get right.

SEE ALSO
Oculus Claims Breakthrough in Hand-tracking Accuracy

VR headsets are ideally robust devices built to withstand daily abuse from their owners, and varifocal displays, which physically move to accommodate a wider range of focus, introduce a number of moving parts that must track the user’s gaze continuously. These parts also complicate manufacturing and likely increase the overall cost of the device.

 

Eye-tracking, however, is both physically robust and probably much cheaper for Oculus to implement, considering its 2016 acquisition of Eye Tribe, a Denmark-based eye-tracking startup which advertised “the world’s most affordable eye tracker” as far back as 2013.

As for the wider field of view: it’s still uncertain whether the varifocal displays were a key technology in obtaining the 140 degree FOV, although Fernandez Guajardo stated at F8 that the company’s “continued innovation in lenses has allowed [Oculus] to pack all of this technology and still keep the Rift form-factor.” One of the images shown at F8 does show a much larger pair of what appear to be Fresnel lenses, so a wider field of view from new optics alone isn’t out of the question either.

Image courtesy Facebook

At GDC last year, Head of Oculus PC VR Brendan Iribe stated that Rift will remain the company’s flagship VR headset for “at least the next two years.” Parsing Iribe’s statement, that puts a potential Rift 2 launch sometime in 2019 at the earliest.

We hope to see more at Oculus Connect 5, which should take place sometime in Fall 2018.


Oculus Reveals 140 Degree VR Headset Prototype with Varifocal Displays

At F8 today, Oculus gave an overview of some of the latest VR technology it has been working on internally. Among the projects mentioned is the ‘Half Dome’ prototype, a Rift-like headset with a 140 degree field of view, varifocal displays, and what appears to be eye-tracking.

Maria Fernandez Guajardo, Head of Product Management, Core Tech at Oculus, revealed the Half Dome prototype after saying that her job is to help take the research that’s happening within the company and turn it into practical building blocks for future projects.

A Rift-like field of view compared to the Half Dome prototype. | Image courtesy Facebook

Guajardo said that the Half Dome prototype manages to pack a 140 degree field of view and varifocal displays into a Rift-like form factor. The wide field of view appears to be thanks to new Fresnel lenses, and the appearance of eye-tracking technology on the headset may also play a role.

Two prototype headsets, apparently with eye-tracking. The left appears to be using Rift-like lenses while the right uses new lenses which are said to have a 140 degree field of view. | Image courtesy Facebook

While eye-tracking may contribute to the field of view improvements, it is almost certainly used primarily for the Half Dome prototype’s varifocal displays, which physically move back and forth to dynamically shift the focus of the optical system.

Image courtesy Facebook

Doing so allows for sharp imagery even with near-field objects, something Guajardo says the consumer headsets of today struggle with. She said that much attention has been paid to making Half Dome’s display actuation system silent and otherwise imperceptible to the user.
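A back-of-the-envelope sketch shows why eye-tracking is central to a varifocal system: the convergence angle between the two eyes' gaze rays gives an estimate of the fixation distance, which the moving displays then try to match. The interpupillary distance, the symmetric-convergence geometry, and the function below are assumptions for illustration, not details of the Half Dome design.

import math

# Estimate fixation distance from horizontal gaze angles (degrees inward from
# straight ahead, one per eye); a varifocal system would move its displays to
# match the estimated focal distance.

IPD_M = 0.063   # assumed interpupillary distance, in metres


def fixation_distance(left_gaze_deg, right_gaze_deg):
    """Distance to the fixation point; parallel gaze returns infinity."""
    convergence = math.radians(left_gaze_deg + right_gaze_deg)
    if convergence <= 0:
        return float("inf")
    return (IPD_M / 2) / math.tan(convergence / 2)


# Example: about 3.6 degrees of total convergence corresponds to roughly 1 m.
print(round(fixation_distance(1.8, 1.8), 2))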

In the last year, Oculus has detailed a range of research projects relating to varifocal technology.

Correction (5/2/18): An earlier version of this article stated that Guajardo was part of Facebook. While Oculus is owned by Facebook, Guajardo is actually part of Oculus. This has been corrected in the article above. Hat tip to Reddit user Heaney555 who pointed this out.


Researchers Exploit Natural Quirk of Human Vision for Hidden Redirected Walking in VR

Researchers from Stony Brook University, NVIDIA, and Adobe have devised a system which hides so-called ‘redirected walking’ techniques using saccades, natural eye movements which act like a momentary blind spot. Redirected walking changes the direction that a user is walking to create the illusion of moving through a larger virtual space than the physical space would allow.

Update (4/27/18): The researchers behind this work have reached out with the finished video presentation for the work, which has been included below.

Original Article (3/28/18): At NVIDIA’s GTC 2018 conference this week, researchers Anjul Patney and Qi Sun presented their saccade-driven redirected walking system for dynamic room-scale VR. Redirected walking uses novel techniques to steer users in VR away from real-world obstacles like walls, with the goal of creating the illusion of traversing a larger space than is actually available to the user.

There are a number of ways to implement redirected walking, but the strength of this saccade-driven method, the researchers say, is that it’s hidden from the user, widely applicable to VR content, and dynamic, allowing the system to direct users away from objects newly introduced into the environment, and even moving objects.

The basic principle behind their work is the exploitation of a natural quirk of human vision, saccadic suppression, to hide small rotations of the virtual scene. Saccades are the quick eye movements that happen when we shift our gaze from one part of a scene to another. Instead of sweeping in a slow, continuous motion from one gaze point to the next, our eyes dart quickly between fixations (except when tracking a moving object or holding focus on a single point), and each jump takes only tens of milliseconds.

An eye undertaking regular saccades

Saccadic suppression occurs during these movements, essentially rendering us blind for a brief moment until the eye reaches its new point of fixation. With precise eye-tracking technology from SMI and an HTC Vive headset, the researchers are able to detect and exploit that temporary blindness to hide a slight rotation of the scene from the user. As the user walks forward and looks around the scene, it is slowly rotated, just a few degrees per saccade, such that the user reflexively alters their walking direction in response to the new visual cues.
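The mechanism can be sketched in a few lines: flag a saccade whenever gaze angular velocity crosses a threshold, then inject a small, capped yaw rotation into the rendered scene only during that window. The velocity threshold and the per-saccade cap below are assumptions for illustration, not the values used by the researchers.

# Illustrative saccade-gated redirection: rotate the scene only while the eyes
# are moving fast enough that saccadic suppression hides the change.

SACCADE_VELOCITY_DEG_S = 180.0   # gaze speeds above this are treated as saccades (assumed)
MAX_ROTATION_PER_SACCADE = 2.0   # degrees of hidden scene yaw injected per saccade (assumed)


def redirect_step(gaze_velocity_deg_s, desired_yaw_deg):
    """Return the hidden scene yaw (degrees) to apply on this frame."""
    if gaze_velocity_deg_s < SACCADE_VELOCITY_DEG_S:
        return 0.0                                   # eyes are fixating: change nothing
    return max(-MAX_ROTATION_PER_SACCADE,
               min(MAX_ROTATION_PER_SACCADE, desired_yaw_deg))


# Example: a planner wants 15 degrees of total redirection; it is doled out a
# couple of degrees at a time, one saccade at a time.
remaining = 15.0
for velocity in [30.0, 400.0, 25.0, 350.0, 500.0]:   # simulated gaze velocities
    remaining -= redirect_step(velocity, remaining)
print(remaining)   # 9.0: three saccades detected, 2 degrees hidden in each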

This method allows the system to steer users away from real-world walls, even when it seems like they’re walking in a straight line in the virtual world, creating the illusion that the virtual space is significantly larger than the corresponding physical space.

A VR backpack allows a user at GTC 2018 to move through the saccadic redirected walking demo without a tether. | Photo by Road to VR

The researchers have devised a GPU accelerated real-time path planning system, which dynamically adjusts the hidden scene rotation to redirect the user’s walking. Because the path planning routine operates in real-time, Patney and Sun say that it can account for objects newly introduced into the real world environment (like a chair), and can even be used to steer users clear of moving obstacles, like pets or potentially even other VR users inhabiting the same space.

The research is being shown off in a working demo this week at GTC 2018. An academic paper based on the work is expected to be published later this year.


Researchers Electrically Stimulate Muscles in Haptic Designed for Hands-free AR Input

Researchers at The Human Computer Interaction Lab at Hasso-Plattner-Institut in Potsdam, Germany, recently published a video showing a novel solution to the problem of wearable haptics for augmented reality. Using a lightweight, mobile electrical muscle stimulation (EMS) device that delivers low-voltage impulses to the arm muscles, the idea is to let AR headset users stay hands-free while still experiencing force feedback when interacting with virtual objects, and feeling extra forces when touching physical objects in their environment too.

Using a HoloLens headset, researchers show their proposed solution in action, which is made up of a backpack, a laptop computer running Unity, a battery-powered EMS machine, electrode pads, and visual markers to better track hand gestures. The researchers say their system “adds physical forces while keeping the users’ hands free to interact unencumbered.”

Image courtesy Hasso-Plattner-Institut

Both HoloLens and the upcoming Magic Leap One include a physical controller; HoloLens has a simple ‘clicker’ and ML One has a 6DoF controller. While both systems admittedly incorporate gestural recognition, there’s still no established way for AR headset users to ‘feel’ the world around them.

According to the paper, which is being presented at this year’s ACM CHI Conference in Montréal, the EMS-based system actuates the user’s wrists, biceps, triceps, and shoulder muscles with low-voltage pulses to simulate a sort of ‘virtual pressure’. This perceived pressure can be triggered when the user interacts with virtual objects such as buttons, and even with physical objects like real-world dials and levers, to create an extra sense of force on the user’s arms.
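A hypothetical sketch of how such a system might map interaction events to stimulation commands is below; the channel names, calibration values, and the send_pulse stand-in are invented for illustration and do not reflect the researchers' actual EMS control software.

# Invented mapping from virtual contact events to EMS pulses, scaled by a
# per-user, per-muscle calibration (real systems require careful electrode
# placement and calibration before each session).

CALIBRATION = {                       # per-user intensity limits, in milliamps (assumed)
    "biceps": 9.0,
    "triceps": 8.0,
    "wrist_flexor": 6.0,
}


def send_pulse(channel, intensity_ma, duration_ms):
    """Stand-in for the EMS hardware interface."""
    print(f"EMS {channel}: {intensity_ma:.1f} mA for {duration_ms} ms")


def on_touch(channel, resistance):
    """Fire a pulse scaled by how 'heavy' the virtual contact should feel."""
    intensity = CALIBRATION[channel] * min(1.0, max(0.0, resistance))
    send_pulse(channel, intensity, duration_ms=150)


# Example: pressing a stiff virtual button gives a stronger counter-force
# than brushing a light one.
on_touch("triceps", resistance=0.8)   # EMS triceps: 6.4 mA for 150 ms
on_touch("triceps", resistance=0.2)   # EMS triceps: 1.6 mA for 150 ms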


There are some trade-offs when using this sort of system though, making it somewhat less practical for long-term use as it’s configured now. Two of the biggest drawbacks: it requires precise electrode placement and per-user calibration before each use, and it can also cause muscle fatigue, which would render it less useful and probably less comfortable.

But maybe a little muscle stimulation can go a long way. The paper discusses using EMS sparingly, playing on the user’s keen sense for plausibility while in a physical (and not virtual) environment.

“In the case of [augmented reality], we observed users remarking how they enjoyed nuanced aspects of the EMS-enabled physics, for instance: ‘I can feel the couch is harder to move when it is stopped [due to our EMS-based static friction]’. As a recommendation for UX designers working in MR, we suggest aligning the ‘haptic-physics’ with the expected physics as much as possible rather than resorting to exaggerations.”

It’s an interesting step that could prove effective as part of a multi-pronged approach to adding haptics to AR wearables, whose users will want to stay hands-free as they go about their daily lives. Actuator-based gloves and vests have been low-hanging fruit so far, and are quickly becoming a standard go-to for VR haptics, but they still seem too much of a stretch for daily AR use. Force-feedback exoskeletons, which physically resist movement, are much bulkier and even more of a stretch currently.

There’s no telling what the prevailing AR wearable will be in the future, but whatever it is, it’s going to have to be both light and useful—two aspects EMS seems to nail fairly well out of the gate.


Oculus Research to Talk “Reactive Displays” for Next-gen AR/VR Visuals at DisplayWeek Keynote

Oculus Research’s director of computational imaging, Douglas Lanman, is scheduled to give a keynote presentation at SID DisplayWeek in May which will explore the concept of “reactive displays” and their role in unlocking “next-generation” visuals in AR and VR headsets.

Among three keynotes to be held during SID DisplayWeek 2018, Douglas Lanman, director of computational imaging at Oculus Research, will present his session titled Reactive Displays: Unlocking Next-Generation VR/AR Visuals with Eye Tracking on Tuesday, May 22nd.

The synopsis of the presentation reveals that Lanman will focus on eye-tracking technology and its potential for pushing VR and AR displays to the next level:

As personal viewing devices, head-mounted displays offer a unique means to rapidly deliver richer visual experiences than past direct-view displays occupying a shared environment. Viewing optics, display components, and sensing elements may all be tuned for a single user. It is the latter element that helps differentiate from the past, with individualized eye tracking playing an important role in unlocking higher resolutions, wider fields of view, and more comfortable visuals than past displays. This talk will explore the “reactive display” concept and how it may impact VR/AR devices in the coming years.

The first generation of VR headsets has made it clear that, while VR is already quite immersive, there’s a long way to go toward the goal of getting the visual fidelity of the virtual world to match human visual capabilities. Simply packing displays with more pixels and rendering higher resolution imagery is a straightforward approach, but perhaps not as easy as it may seem.

An eye-tracking addon for the HTC Vive. IR LEDs (seen surrounding the lens) illuminate the pupil while a camera watches for movement.

Over the last few years, a combination of eye-tracking and foveated rendering technology has been proposed as a smarter pathway to greater visual fidelity in VR. Precise eye-tracking technology could understand exactly where users are looking, allowing for foveated rendering—rendering in maximum fidelity only at the small area in the center of your vision which sees in high detail, while keeping computational load in check by reducing the rendering quality in your less detailed peripheral vision. Hardware foveated display technology could even move the most pixel-dense part of the display to the center of the user’s gaze, potentially reducing the challenge (and cost) of cramming more and more pixels onto a single panel.
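The core trade-off can be sketched very simply: pick a shading quality for each screen region based on its angular distance from the tracked gaze point. The band sizes, quality levels, and the crude one-dimensional cost estimate below are assumptions for illustration; production foveated renderers use far more careful falloffs.

# Illustrative foveated shading-rate selection based on eccentricity from the
# gaze point, plus a crude one-dimensional estimate of the shading savings.

def shading_rate(deg_from_gaze):
    """Fraction of full shading resolution to spend on a screen tile."""
    if deg_from_gaze < 5.0:       # foveal region: full quality
        return 1.0
    if deg_from_gaze < 20.0:      # near periphery: a quarter of the samples
        return 0.25
    return 0.0625                 # far periphery: 1/16 of the samples


# Example: tiles spanning 0..55 degrees of eccentricity (half of a ~110 degree
# field of view, user looking at the centre).
tiles = range(0, 56, 5)
avg = sum(shading_rate(t) for t in tiles) / len(tiles)
print(f"average shading cost vs. naive full-resolution rendering: {avg:.2f}")   # ~0.19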

The same eye-tracking approach could be used to better correct various lens distortions no matter which direction the user is looking, which could improve visual fidelity and potentially make larger fields of view more practical.

SEE ALSO
Oculus Research Reveals New Multi-focal Display Tech

Lanman’s concept of “reactive displays” sounds, at first blush, a lot like the approach that NVIDIA Research calls “computational displays,” which it detailed in depth in a recent guest article on Road to VR. The idea is to make the display system itself, in a way, aware of the state of the viewer, and to move key parts of display processing to the headset itself in order to achieve the highest quality and lowest latency.

Despite the benefits of eye-tracking and foveated rendering, and some very compelling demonstrations, it still remains an area of active research, with no commercially available VR headsets yet offering a seamless hardware/software solution. So it will be interesting to hear Lanman’s assessment of the state of these technologies and their applicability to AR and VR headsets of the future.


Stanford VR Lab Founder’s Academic Journey with VR, and new Book ‘Experience on Demand’

Jeremy Bailenson is the founding director of Stanford University’s Virtual Human Interaction Lab, and his latest book, Experience on Demand, traces his academic journey through virtual reality. It’s an intellectual memoir that focuses on his personal work in VR and the insights that VR provides into human communication dynamics, as well as the impact of VR on our identity, empathy, education, and medicine, and on our ability to understand complex issues such as global warming and our impact on the environment.
LISTEN TO THE VOICES OF VR PODCAST

I had a chance to sit down with Bailenson to talk about his journey into VR, the major insights that VR has provided into human communication, and how STRIVR, the company he co-founded, is moving from training elite quarterbacks in the NFL to landing major corporate training contracts including training Wal-Mart employees. STRIVR is gathering one of the most robust data sets for using VR for education and training, which is enabling them to build statistical models to make connections between unconscious biometric gaze data and the process of learning, Bailenson says.

We also talk about how AI and machine learning will help build powerful models for biometric data, but also some of the privacy implications of this data as well as what we know and don’t know when it comes to the risks and dangers of virtual reality technology.


Support Voices of VR

Music: Fatality & Summer Trip
