Advanced Brain Monitoring EEG Metrics and Experimental VR Treatments for Neurodegenerative Diseases

Advanced Brain Monitoring is a 17-year-old neurotechnology company that has been able to extract a lot of useful information from EEG data. They've developed specific EEG metrics for drowsiness, flow states, engagement, stress, emotion, and empathy, as well as biomarkers for different types of cognitive impairment. They've also developed a brain-computer interface that can be integrated with a VR headset, which has allowed them to build a couple of VR medical applications for PTSD exposure therapy as well as some experimental VR treatments for neurodegenerative diseases like dementia.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with Advanced Brain Monitoring's CEO and co-founder Chris Berka at the Experiential Technology conference, where we talked about their different neurotechnology applications ranging from medical treatments and cognitive enhancement to accelerated learning and performance training processes that guide athletes into optimal physiological flow states.

Advanced Brain Monitoring operates within the context of a medical application with an institutional review board and HIPAA-mandated privacy protocols, and so we also talked about the ethical implications of capturing and storing EEG data within a consumer context. Berka says, "That's a huge challenge, and I don't think that all of the relevant players and stakeholders have completely thought through that issue."

They've developed a portfolio of biomarkers for neurodegenerative diseases including Alzheimer's Disease, Huntington's Disease, Mild Cognitive Impairment, Frontotemporal Dementia, Lewy Body Dementia, and Parkinson's Disease. They've shown that it's possible to detect a number of medical conditions based upon EEG data, which raises additional ethical questions for any future consumer-based VR company that records and stores EEG data. What are their disclosure or privacy-protection obligations if they can potentially detect a number of different medical conditions before you're aware of them?

SEE ALSO
Privacy in VR Is Complicated and It'll Take the Entire VR Community to Figure It Out

The convergence of EEG and VR is still in the DIY and experimental phases, with custom integrated B2B solutions coming soon from companies like Mindmaze, but it's still pretty early for consumer-based applications of EEG and VR. Any integration would require piecing together hardware options from companies like Advanced Brain Monitoring or the OpenBCI project, and then you'd also likely need to roll your own custom applications.

There are a lot of exciting biofeedback-driven mindfulness applications and accelerated learning and training applications that will start to become more available, but some of the first EEG and VR integrations will likely be within the context of medical applications like neurorehabilitation, exposure therapy, and potential treatments for neurodegenerative diseases.


Support Voices of VR

Music: Fatality & Summer Trip

The post Advanced Brain Monitoring EEG Metrics and Experimental VR Treatments for Neurodegenerative Diseases appeared first on Road to VR.

Biometric Data Streams and the Unknown Ethical Threshold of Predicting & Controlling Behavior

I recently attended the Experiential Technology Conference, where there were a lot of companies looking at how to use biometric data to get insights into health & wellness, education for cognitive enhancement, market research, and more. Over the next couple of years, virtual reality platforms will be integrating more and more of these biometric data streams, and I wanted to learn about what kind of insights can be extrapolated from them. I talked with behavioral neuroscientist John Burkhardt from iMotions, one of the leading biometric platforms, about what metrics can be captured from eye tracking, facial tracking, galvanic skin response, EEG, EMG, and ECG.


I also talked to Burkhardt about some of the ethical implications of integrating these biometric data streams within an entertainment and consumer VR context. He says that the fields of advertising and brainwashing often borrow from each other’s research, and he is specifically concerned about whether or not it’ll be possible to hack our fixed action patterns, which are essentially stimulus response behaviors that could be operating below our conscious awareness.

Most of the work that iMotions does is within the context of controlled research and explicit consent of participants, but what happens when entire virtual environments can be controlled and manipulated by companies who know more about your unconscious behaviors than you do?

Burkhardt says that there is a threshold beyond which he would consider the capture and use of this biometric data to be unethical, but the problem is that no one really knows where that threshold lies. We might be able to recognize it after it's already been crossed, but it's hard to predict what that looks like or when it might happen. We're not there yet, but the potential is clearly there. An open question is whether or not the VR community is going to take a reactive or proactive approach to it.

Burkhardt also says that these types of issues tend to be resolved by implicit collective consensus in the sense that we’re already tolerating a lot of the cost/benefit tradeoffs of using modern technology. He says that it’s just a matter of time before someone creates a way to formulate a unique biometric fingerprint based upon aggregating these different data streams, and it’s an open question as to who should own and control that key.

The insights from biometric data streams could also evolve to the point where the big data companies capturing them could not only predict our behavior, but potentially even directly manipulate and control it. He also says that it raises deeper philosophical questions: if someone can take away our free will with the right stimuli, then did we even have it to begin with?

SEE ALSO
Facebook Study Finds Introverts Feel More Comfortable with VR Social Interaction

As I covered in my previous podcast with Jim Preston, it's easy to jump to utopian or dystopian outcomes regarding privacy in VR, but the reality is more likely to fall somewhere in between, as it is complicated and complex. There are lots of potential new forms of self-awareness in being able to observe our autonomic and unconscious internal states of being, as well as changes to the depth and fidelity of social interactions. But there are also risks of this type of data being used to shape and control our behaviors in ways that cross an ethical threshold. It's something that no individual person or company can figure out on their own, and it's going to require the entire VR community.



Privacy in VR Is Complicated and It’ll Take the Entire VR Community to Figure It Out

When I was at the GDC VR Mixer, Jim Preston struck up a conversation about his concerns about privacy in VR. He works at FOVE, which is making a VR headset with eye tracking, but he wanted to speak to me on his own behalf about some of the deeper philosophical questions and conceptual frameworks around the types of intimate data that will become available to VR headsets. As more and more biometric data streams are integrated into VR, there are a lot of complicated and complex ethical questions that he thinks the entire VR community will need to figure out.


Preston says that VR is challenging the long-standing Enlightenment model of mind-body dualism, and that VR is able to do a sort of "redirected thinking" in being able to completely control all aspects of someone else's reality. This is a lot of power to put into the hands of performance-based marketing companies that already have an extraordinary amount of data about our private lives; he has concerns that this data could start to be used to drive consumer behaviors in unconscious ways.

The technological roadmap for VR includes integrations with new biometric data streams including eye tracking, facial tracking, galvanic skin response, sensing of emotional states, our voice interactions, and perhaps even EEG brainwave data. This data has typically had tight privacy controls, either within the context of medical applications or market research that requires explicit consent, but now it's being captured within the context of an attention-driven consumer market where many other vectors of private data have already been collected and connected to your personal identity.

Here are some of the open questions around the future of privacy in VR:

  • Do we need to evolve the business models in order to sustain VR content creation in the long-term?
  • If not, then what are the tradeoffs to privacy in using the existing ad-based revenue streams that are based upon a system of privatized surveillance that we've consented to over time?
  • Should biometric data be classified as medical information and protected under HIPAA?
  • What is a conceptual framework for what data should be private and what should be public?
  • What type of transparency and controls should users expect from companies?
  • Should companies be getting explicit consent for the types of biometric data that they want to capture, store, and tie back to our personal identities?
  • If companies are able to diagnose medical conditions from these new biometric indicators, then what is their ethical responsibility to report this to users?

Preston has a nuanced view of what VR is going to enable: he thinks that it's not going to be either a totally dystopian or utopian future, but that our future is going to be complicated and complex. Much like chess teams of humans & AI are able to beat any pure AI program, this type of cooperation between humans and machines is going to enable all sorts of amazing new capabilities while also introducing challenging new problems.

The future integration of biometric data into immersive technologies will bring an array of complicated and complex questions that go beyond what any single company or individual can figure out, and Preston says that this is something that the VR community as a collective should talk about in an attempt to answer some of these open questions.

I’d like to keep this conversation going too; I’ll soon be featuring some more information from biometric experts from the Experiential Technology Conference on the Voices of VR Podcast as well as an interview with Oculus’ Nate Mitchell.

For my previous coverage on privacy in VR, be sure not to miss Sarah Downey's take on privacy in VR and the relationship between the 1st and 4th Amendments, as well as Tobii Eye Tracking's recommendation for explicit consent for recording eye tracking data, HTC's Dan O'Brien, the following two interviews with Google with some open questions about Google Earth VR & Tilt Brush, and my interview with Linden Lab's Ebbe Altberg.



Embodied Cognition Experiments with EleVR’s Math Museums & Hyperbolic Space

I believe that the principle of embodied cognition is one of the most significant concepts to understand about virtual reality. Cognitive science researchers have been connecting the dots on the importance of our bodies when it comes to perception, the subjective construction of reality, and how we process and think about information. We use our entire body and surrounding environment in our cognitive processes, and virtual reality is bringing our entire bodies into computing in a way that takes full advantage of the insights coming from embodied cognition research.


EleVR is a VR research collective that has declared 2017 the "Year of the Body." "Mathemusician" and virtual reality philosopher Vi Hart was a self-proclaimed 'body skeptic,' seeing the body as an inconvenience to take care of in the pursuit of higher forms of beauty in math and music, but after some preliminary experiments with embodied visualizations of physics, she started to have a direct experience of the power of embodied knowledge.

I had a chance to catch up with EleVR’s Vi Hart and M Eifler to hear about their VR experiments and research into embodied cognition from creating interactive math museums built around 3D Venn Diagrams, visualizing hyperbolic space, and exploring the boundaries of container schemas and metaphors for understanding the concept of home and a place to rest.

Venn Diagram Museum

Hyperbolic Space in VR

Real Virtual Physics

Check out my previous episodes about Embodied Cognition:



Tobii Recommends Explicit Consent for Recording Eye Tracking Data

The eye tracking company Tobii had some VR demos that they were showing on the GDC expo hall floor as well as within Valve's booth. They were primarily focusing on the new user interaction paradigms made available by eye gaze: selecting specific objects, directing action, and even driving locomotion. I had a chance to catch up with Johan Hellqvist, VP of products and integrations at Tobii, where we discussed some of the eye tracking applications being demoed. We also had a deeper discussion about what types of eye tracking data should be recorded and the consent that application developers should secure before capturing and storing it.


One potential application that Hellqvist suggested was amplifying someone's pupil dilation in a social VR context as a way of broadcasting engagement and interest. He said that there isn't settled science connecting dilation with someone's feelings, but this example brought up an interesting point about what types of data from an eye tracker should or should not be shared or recorded.

Hellqvist says that from Tobii's perspective, application developers should get explicit consent for any type of eye tracking data that they want to capture and store. He says, "From Tobii's side, we should be really, really cautious about using eye tracking data to spread around. We separate using eye tracking data for interaction… it's important for the user to know that's just being consumed in the device and it's not being sent [and stored]. But if they want to send it, then there should be user acceptance."

Hellqvist says our eye gaze is semi-conscious data that we have limited control over, and that it will ultimately be up to each application developer to decide what to do with that data. Tobii has a separate part of their business that does market research with eye tracking data, but he cautions that using eye tracking within consumer applications is a completely different context from market research, which already requires explicit consent.
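The distinction Hellqvist draws between gaze data consumed on-device for interaction and gaze data that gets recorded or transmitted can be sketched as a simple consent gate. This is a hypothetical illustration, not Tobii's API; all of the class and method names here are invented:

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp: float
    x: float  # normalized gaze position on the display, 0..1
    y: float

class GazePipeline:
    """Hypothetical consent gate for eye tracking data.

    Gaze samples may always be consumed on-device to drive interaction
    (selection, locomotion), but they may only be recorded or sent off
    the device after explicit user consent.
    """

    def __init__(self):
        self.recording_consent = False
        self._recorded = []

    def grant_recording_consent(self):
        self.recording_consent = True

    def gaze_select(self, sample, targets):
        # On-device interaction: pick the target nearest the gaze point.
        # The sample is consumed here and never stored.
        best, best_dist = None, 0.05  # must be within 5% of the display
        for name, (tx, ty) in targets.items():
            dist = ((sample.x - tx) ** 2 + (sample.y - ty) ** 2) ** 0.5
            if dist < best_dist:
                best, best_dist = name, dist
        return best

    def record(self, sample):
        # Storing or transmitting gaze data requires explicit consent.
        if not self.recording_consent:
            raise PermissionError("explicit user consent required to record gaze data")
        self._recorded.append(sample)
```

The point of the sketch is that the interaction path never touches storage at all, so "gaze for interaction" needs no approval, while the recording path is fail-closed until the user opts in.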

SEE ALSO
Watch: Tobii Reveals Vive VR Eye Tracking Examples, Including 'Rec Room'

Hellqvist says, “It’s important to realize that when you do consumer equipment and consumer programs that the consumer knows that his or her gaze information is kept under control. So we really want from Tobii’s side, if you use the gaze for interaction then you don’t need the user’s approval, but then it needs to be kept on the device so it’s not getting sent away. But it should be possible that if the user wants to use their data for more things, then that’s something that Tobii is working on in parallel.”

Tobii will be actively working with the OpenXR standardization initiative to see if it makes sense to put some of these user consent flags within the OpenXR API. In talking with other representatives from OpenXR about privacy, I got the sense that the OpenXR APIs will be a lot lower level than these types of application-specific requirements. So we'll have to wait for OpenXR's next update in the next 6-12 months to see whether or not Tobii was able to formalize any type of privacy protocols and controls within the OpenXR standard.

SEE ALSO
Valve Talks 'OpenXR', the Newly Revealed Branding for Khronos Group's Industry-backed VR Standard

Overall, the Tobii and SMI VR demos that I saw at GDC proved to me that there are a lot of really compelling social presence, user interface, and rendering applications of eye tracking. However, there are still a lot of open questions around the intimate data that will be available to application developers, and around the privacy and consent protocols that will inform users and provide them with some level of transparency and control. It's an important topic, and I'm glad that Tobii is leading an effort to bring more awareness to this issue within the OpenXR standardization process.



Deepening Social Presence with SMI Eye Tracking

At GDC this year, SensoMotoric Instruments (SMI) showed a couple of new eye tracking demos at Valve's booth. They added eye tracking to avatars in the social VR experiences of Pluto VR and Rec Room, which provided an amazing boost to the social presence within these experiences.

There are so many subtle body language cues that are communicated non-verbally through someone else's eye contact, gaze position, or even blinking. Since it's difficult to see your own eye movement due to saccadic masking, it's best to experience eye tracking in a social VR context. Without a recording of your own eyes in social VR, you have to rely upon looking at a virtual mirror as you look to the extremes of your periphery, observing your vestibulo-ocular reflex as your eyes lock gaze while you turn your head, or winking at yourself.


I had a chance to catch up with SMI’s Head of the OEM business Christian Villwock at GDC to talk about the social presence multiplier of eye tracking, the anatomy of the eye, and some of the 2x performance boosts they’re seeing with foveated rendering on NVIDIA GPUs.
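That performance win comes from shading at full rate only near the gaze point, since visual acuity falls off sharply with distance from the fovea. Here is a toy model of the savings; the radii and shading rates are invented for illustration and are not SMI's or NVIDIA's actual parameters:

```python
import math

def shading_rate(pixel, gaze, fovea_radius=0.1, mid_radius=0.25):
    """Toy foveated-rendering policy: full shading rate near the gaze
    point, coarser rates in the periphery where acuity is low.
    Coordinates are normalized to 0..1; the radii are assumptions."""
    d = math.dist(pixel, gaze)
    if d < fovea_radius:
        return 1  # full rate: one shade per pixel
    if d < mid_radius:
        return 2  # half rate: one shade per 2x2 block
    return 4      # quarter rate: one shade per 4x4 block

def shading_cost(width, height, gaze):
    """Relative fragment-shading work compared to shading every pixel."""
    work = 0.0
    for px in range(width):
        for py in range(height):
            pixel = ((px + 0.5) / width, (py + 0.5) / height)
            rate = shading_rate(pixel, gaze)
            work += 1.0 / (rate * rate)
    return work / (width * height)
```

With the gaze centered, only a small fraction of fragments fall inside the full-rate foveal region, which is the intuition behind the reported speedups.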

‘Bigscreen’ Social Computing Space Metrics Show Big Value for VR Power Users

Bigscreen VR announced that they raised $3 million for their "social utility" VR application. Bigscreen gives you access to your computer screen in VR, which is a deceptively simple idea, but one that is unlocking new ways of working on your computer and enabling collaborative social environments that range from virtual 2D video game LAN parties to productive work meetings.


I had a chance to catch up with founder Darshan Shankar at Oculus Connect 3 last October to talk about his founding story, and how he’s designed Bigscreen with privacy in mind through encrypted peer-to-peer networking technology that he developed. It’s a formula that seems to be working since he reports that “power users spend 20–30 hours each week in Bigscreen,” making it what Shankar calls, “one of the most widely used ‘killer apps’ in the industry.”

Those are astounding numbers for any social VR application, and the key to Bigscreen VR’s success is that they’ve been providing a more immersive and social experience of 2D content ranging from games to movies, and pretty much anything else you can do on your home computer.

The latest release of Bigscreen enables you to have up to three monitors in VR, which could provide an even better experience of working on your computer than in real life. You can stream Netflix or YouTube on a giant movie screen while playing a video game, designing an electrical circuit, browsing Reddit, or creating a 3D model in Maya. In Bigscreen, you can basically do anything that you can do on your computer screen, but in VR.

The limited resolution of today's headsets for comfortably reading text is the biggest constraint for now, but there are plenty of other tasks that people have found are more enjoyable in VR than in real life. It's not just the immersive nature, improved focus, and unlocking of the spatial thinking potential of your brain; in Bigscreen you can do it all with friends.

Adding a social dimension to computing in a private way is one of the keys to Bigscreen's success. You can use Bigscreen by yourself, or you can create a private room using peer-to-peer technology such that what you're actually doing in Bigscreen isn't even being passed through any of Bigscreen's servers. And if you want to have a public cafe experience and connect with hardcore VR enthusiasts from around the world, then create a public room and see who comes through. There's a wide range of people looking to do everything from connecting socially and casually to recreating the increased focus that can come from working in public spaces, away from the private context of your home.
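The privacy model described above can be sketched roughly as follows. This is a hypothetical simplification, not Bigscreen's actual networking code; the point is just that room content fans out peer-to-peer while only public rooms ever appear in a server-side directory:

```python
class Room:
    """Hypothetical sketch of Bigscreen-style room privacy. Screen frames
    are sent directly to connected peers; a central server is only used
    for discovery, and only public rooms are ever listed there."""

    def __init__(self, name, public=False):
        self.name = name
        self.public = public
        self.peers = []

    def join(self, peer):
        self.peers.append(peer)

    def route_frame(self, frame):
        # Peer-to-peer delivery: the frame goes straight to each peer,
        # never passing through the directory server below.
        return [(peer, frame) for peer in self.peers]

class Directory:
    """Server-side room listing: private rooms stay invisible."""

    def __init__(self):
        self._listed = []

    def register(self, room):
        if room.public:
            self._listed.append(room)

    def browse(self):
        return [room.name for room in self._listed]
```

In a real system the server would still broker the initial peer handshake, but the design choice stands: the content of a private session never needs to touch it.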

Taking all of that into account, and based upon my own direct experiences of using Bigscreen over the last couple of weeks, I can say that Bigscreen VR is definitely the leading contender to become one of the first killer applications of VR. It's a social utility with the potential to connect you to friends, family, romantic and business partners, as well as complete strangers who spend a considerable amount of time living in the early days of the metaverse.



‘Neurospeculative Afrofeminism’: Using VR to Build the Future You Want to Live Into

Hyphen Labs is an immersive design collective made up of women of color, and they were showing a sci-fi VR experience at Sundance this year called Neurospeculative Afrofeminism. Their VR experience features black women as some of the pioneers of brain optimization, and you get to experience a futuristic neurocosmetology lab where you can receive transcranial stimulation. As you get this neuroplasticity treatment, you're transported into a magical world of speculative products that place women of color at the center of the design narrative.


I had a chance to catch up with Carmen Aguilar y Wedge, Ashley Baccus-Clark, Ece Tankal, and Nitzan Bartov at Sundance where we talked about the process of writing a love letter to black women and creating an experience that helps them live into a future that they want to help create. Wedge cites the quote “You can’t be what you can’t see,” which provided inspiration for them to create an experience where they could create virtual characters within a context of technological innovation in order to directly stimulate neural pathways and re-wire their own brains using the principles of synaptic plasticity.

It's an open question as to whether or not it will ever be technically possible to do Matrix-style neural injection, where you're essentially directly hacking the senses. At the MIT Technology conference, I had a chance to see Kernel founder Bryan Johnson, who talked about implanting a chip into the brains of patients with Alzheimer's or dementia in order to see if it's possible to use computer-generated neural coding to help improve the memory of these subjects. MIT graduate student Seongjun Park was able to deliver a "combination of optical, electrical, and chemical signals back and forth into the brain" by using tiny fibers. This three-in-one design claims to enable genetic, chemical, optical, and electrical inputs and outputs to the brain.

It's possible that direct brain-computer interfacing will be completely mainstream in 50 years. It's starting with treating brain diseases, will likely move on to cognitive enhancement, and perhaps eventually to further brain optimizations. But it's also unknown whether or not direct neural interfaces will prove to be better than our existing perceptual inputs, which are able to correlate our subjective experiences and emotions into the context of processing data inputs.

The principles of embodied cognition indicate that the process of memory creation involves the entire body as well as your contextual environment. There may also be a layer of consciousness and meaning, a holistic system, that proves difficult to directly bypass in order to increase the bandwidth of input into the brain. This is still mostly within the realm of speculative sci-fi, but these recent news items are paving the pathway toward the type of future that Hyphen Labs is painting within their Neurospeculative Afrofeminism VR experience.



Experiential Poems: Exploring Emotions & Embodied Vulnerability in VR with Cabbibo

Using language to translate an experience into words is one of the highest levels of abstraction that we know as humans. Poetry's visual metaphors can reach deeper levels of emotion, and virtual reality can remove nearly all levels of abstraction by tricking your senses into having a direct sensory experience within your body. Indie VR artist Isaac "Cabbibo" Cohen has started to create a sort of 'experiential poem,' using virtual reality to explore how to invoke complicated emotions that transcend words.


I had a chance to catch up with Cabbibo at GDC to talk about his process of using VR for emotional exploration. He was previewing a couple of new experiences at the Valve booth including a picture-book VR narrative called Delila’s Gift, and a social VR environment called Ring Grub Island that was designed for mutual exploration and embodied vulnerability.

Works of Cabbibo’s like ‘Blarp‘ push the boundaries of what a VR experience can and should be

Cabbibo has released four brief VR experiences and games on Steam so far, including Blarp, L U N E, Warka Flarka Flim Flam, and My Lil' Donut, which explore new types of embodied gameplay in VR that beg us to use our bodies in new ways. The experience of building an imaginary fortress in L U N E catalyzed a deep emotional reaction from many users, like this one from user 'Hyperion': "Half way through this I crouched to the floor and burst out in tears."

Cabbibo told me last year that his favorite VR experience to date has been Irrational Exuberance and there haven’t been many other experiences that have inspired him to use his body to explore a space and contemplate the meaning of existence in quite the same way.

Now he's on his own journey to create more of these experiential, poem-like VR experiences that try to capture the essence of an emotion. After starting therapy last year, he's been finding VR to be a robust expressive medium for exploring and playing with his own emotional states, one that is more interesting to him than his early experiments in embodied gameplay. He's beginning to explore what it means to be vulnerable within an embodied and social context, and in the end he wants to use VR to help people realize how being alive is such a miracle. Cabbibo is doing some of the most groundbreaking work in discovering the unique affordances of VR as an artistic medium, but more importantly he's using VR as a mirror to learn more about what it means to be human.



Valve Talks ‘OpenXR’, the Newly Revealed Branding for Khronos Group’s Industry-backed VR Standard

Valve's Joe Ludwig talks about the latest updates on the Khronos Group's VR standardization process, which is now being called "OpenXR." Ludwig says that OpenXR is still primarily creating an open and royalty-free standard for virtual reality, but that they wanted to plan for the future and eventually accommodate augmented reality as well. In my Voices of VR interview with Ludwig, he talks about the OpenXR standardization process from Valve's perspective and how they want to see VR become as open of a platform as the PC has been.


The OpenXR working group has just completed its exploratory process, and there are still numerous open debates, so the Khronos Group is making this announcement of a name and logo at GDC in order to encourage more VR headset and peripheral companies to get involved in the standardization process. Ludwig can't speak on behalf of any OpenXR decisions yet, but he was able to provide more insight into Valve's motivations in the process, which are to develop a standard that will provide what they see as a minimal baseline for a quality VR experience, as well as to make VR an open platform. OpenXR will also span the full spectrum from 3DoF mobile to 6DoF room-scale, and so there are many active discussions within the working group about what all will be included in the 1.0 specification.

VR is a new computing platform, and the OpenXR standard aims to help keep both VR and AR open platforms. The Khronos Group's OpenXR initiative aims to lower the barriers to innovation for virtual reality so that eventually a VR peripheral company only has to write a single driver to work with all of the various VR headsets. But in order to know what APIs should be available for developers, the standardization process requires participation from as many VR companies as possible. Part of the announcement at GDC is to say that the working group has finished its preliminary exploration, and that they're ready for more companies to get involved.
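The "write once, run on any headset" goal can be illustrated with a toy abstraction layer: the application codes against one interface, and each vendor implements it once. This is purely illustrative Python, not the actual OpenXR API (which is a C API with a very different shape), and all names here are invented:

```python
from abc import ABC, abstractmethod

class Headset(ABC):
    """Toy stand-in for a standardized runtime interface. An application
    written against this one interface runs on any conforming device."""

    @abstractmethod
    def degrees_of_freedom(self):
        ...

    @abstractmethod
    def head_pose(self):
        ...

class RoomScaleHeadset(Headset):
    # 6DoF device: tracks both position and orientation.
    def degrees_of_freedom(self):
        return 6

    def head_pose(self):
        return {"position": (0.2, 1.6, -0.1), "orientation": (0.0, 0.0, 0.0, 1.0)}

class MobileHeadset(Headset):
    # 3DoF device: orientation only, with a fixed default position.
    def degrees_of_freedom(self):
        return 3

    def head_pose(self):
        return {"position": (0.0, 0.0, 0.0), "orientation": (0.0, 0.0, 0.0, 1.0)}

def render_frame(headset):
    # Application code: written once, with no knowledge of which
    # vendor's device is underneath.
    pose = headset.head_pose()
    return f"rendered at {pose['position']} with {headset.degrees_of_freedom()}DoF tracking"
```

The same `render_frame` runs unmodified against either device class, which is the fragmentation-reducing promise of a common standard; the open debates are about exactly what belongs in that shared interface.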

In my previous interview with Khronos Group President Neil Trevett, he said that this standardization process typically takes about 18 months or so. Given that it was first announced in December 2016, then I’d expect that we might be seeing a 1.0 specification for OpenXR sometime in the first half of 2018. It also depends upon how motivated all of the participants are, and there seems to be a critical mass of major players in the industry to help make this happen and so it could happen sooner.

As to whether or not OpenXR will mean that any VR headset will work with any VR software, that's one of the theoretical technical goals, but there are many constraints to making it happen. Ludwig said that while technically this could be made possible with OpenXR, there will still be a layer of business decisions around platform exclusives. In talking to Nate Mitchell of Oculus, I heard that even if Oculus implements OpenXR, they still want to make sure that it would be a quality experience. Ludwig said that there will be other constraints in having the proper input controls, button configurations, and minimal set of hardware available for some experiences to work properly. It's also still too early to know what the final OpenXR spec will look like, so companies can't yet make any specific commitments about cross-compatibility. I'll have more details on Oculus' perspective on OpenXR early next week with a Voices of VR interview with Nate Mitchell.

Overall, I think that OpenXR is probably one of the most significant collaborations across the entire VR industry. The Khronos Group says that the OpenXR "cross-platform VR standard eliminates industry fragmentation by enabling applications to be written once to run on any VR system, and to access VR devices integrated into those VR systems to be used by applications." If VR and AR want to become the next computing platform, then OpenXR is a key technology to help make that happen.

