Orpheus Technodelics is Publishing Consciousness-hacking VR Experiences

Sound Self developer Robin Arnott is starting a publishing label named Orpheus Technodelics to curate and distribute consciousness-hacking VR experiences. 'Technodelics,' according to Arnott, are digital psychedelics that provide peak experiences aimed at opening someone's mind to deeper contemplative practices. He sees that spiritual traditions are steeped in a level of formality and seriousness that is a turn-off for a lot of people, and he hopes to use the insights of game design to make transcendent experiences more readily available through virtual reality.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Arnott has been working on the Sound Self for over six years now, and he's hoping to tap into the larger mindfulness market with other similar immersive, consciousness-hacking experiences. He's working with visionary artist Android Jones, whose MicroDose VR is a psychedelic particle painting program that cultivates flow states, as well as on a mobile app called Breathscape, which helps cultivate regular breathing practices.

I talk to Arnott about the consciousness-hacking movement founded by Mikey Siegel, his concept of what constitutes a VR 'technodelic,' balancing peak experiences with the cultivation of spiritual practices using technology, takeaways from his GDC talk on designing a trance meditation game, and how he navigates his personal mission and role in telling the larger story of transcendent technology and its connection to the broader mindfulness market and ecosystem.


Support Voices of VR

Music: Fatality & Summer Trip

The post Orpheus Technodelics is Publishing Consciousness-hacking VR Experiences appeared first on Road to VR.

Indigenous Futurism & Aboriginal Territories in Cyberspace with Jason Edward Lewis

How do we reckon with the past, present, and the future, and what types of possibilities open up when you start to tell stories from the perspective of seven generations from now, roughly 150 years into the future?

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

I got to explore these questions in a talk with Jason Edward Lewis about the Aboriginal Territories in Cyberspace initiative, as well as the two 2167 indigenous storytelling VR pieces that he helped to produce. This conversation took place at the Symposium iX conference at the Society for Arts and Technology in Montreal, Canada.



Immersive Education with Google Expeditions, AR, & Virtual Tours

Google's mission statement is to "organize the world's information and make it universally accessible and useful," so it's a natural fit that they'd be a leader in creating educational experiences for AR & VR. Google Expeditions continues to grow its library, now with 800 new expeditions, and the program has reached 3 million students through the Google Expeditions Pioneer Program.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

They will soon be adding AR support to Expeditions, and there is also a set of virtual laboratory experiences created by Labster that schools can use to supplement or replace their existing labs with virtual biology or chemistry labs.

Google also announced Tour Creator at Google I/O, which will allow anyone to create annotated virtual tours with 360 photos that they take or screengrab from Google Street View. These tours can be uploaded to Google Poly, where WebXR will be enabled so that these virtual tour experiences can be shared through a URL.

I had a chance to catch up with a couple of people on Google's VR/AR team at Google I/O, including Jennifer Holland, a program manager for Google Expeditions & Tour Creator, as well as Brit Mennuti, a product manager for Blocks, Poly, & Tour Creator. I talked with Holland last year at Google I/O, and so she filled me in on everything that's new with Google's immersive education initiatives, including Google Expeditions, Virtual Tours, and Best Buy's Google Expeditions Kits.



Oculus Go + Open Questions Around Facebook, Privacy, Free Speech, & Virtual Governance

The Oculus Go was released on Tuesday, May 1st at the Facebook F8 developer conference. It is a self-contained, 3-DoF mobile VR headset priced at $200 that is optimized for media consumption and social VR interactions. Facebook showed off four first-party applications, including Oculus TV, Oculus Gallery, Oculus Rooms, and Oculus Venues. Oculus Venues will be treated as public spaces governed by Oculus' updated Terms of Service, which includes a code of conduct to ensure safe online spaces. In order to enforce the code of conduct, Oculus will need to do some amount of capturing and recording of what happens in these virtual spaces, which has a number of privacy implications, creating tradeoffs between cultivating safe online spaces and eroding aspects of free speech and Fourth Amendment privacy rights within these virtual spaces.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Editor’s Note: We’re catching up on publishing a small backlog of Voices of VR episodes. This episode was recorded around the time of the Oculus Go launch back in May, but includes broader discussion about privacy in VR that remains relevant today (and well into the future).

Facebook announced at F8 that it is planning on moderating its other networks through AI moderation, and so it's likely that Oculus will also eventually try to moderate virtual spaces with AI. What will it mean to have our public virtual interactions mediated by AI overlords? This brings up questions about the limits and capabilities of supervised machine learning to technologically engineer cultural behaviors. The Cleaners documentary at Sundance went behind the scenes with Facebook's human content moderators to demonstrate how subjective the enforcement of terms of service policies can be, leaving avant-garde artists susceptible to false-positive censorship that results in permanent bans from these communication platforms with no appeals process. How will AI solve a problem where it's impossible to define objective definitions of free speech that span the full spectrum from artistic expression to terrorist propaganda? Facebook is becoming larger than any single government, but it doesn't have the same levels of democratic accountability, whether through democratic models of virtual governance or appeals processes for bans.

SEE ALSO
Oculus Go Review: Standalone VR Priced for the Masses

So while Oculus Go is an amazing technological achievement of hardware, software, and user experience, there are some larger open questions about the role of Facebook and what will happen to our data on this platform. What is their plan for virtual governance? How will they deal with the long-term implications of bans? What does the appeals process look like for false positives of code of conduct violations? What data are being recorded? How will Facebook notify users when they start recording new data or change data recording policies? What data will be sent from Oculus to Facebook? Why don’t these online spaces have peer-to-peer encryption? Does Facebook want to eventually listen into all of our virtual conversations? How will Facebook navigate the balance between free speech and the desires of governments to control speech?

Oculus' privacy policy is incredibly open-ended, and without a real-time database of what data are recorded, there is no accountability mechanism that provides users full transparency as to what is being captured and recorded across these different contexts. I received some specific answers to some of these questions in this episode as well as in episode #641 in talking with Oculus' chief privacy architect, but the privacy policy allows Oculus/Facebook to change what is recorded, and where, at any moment.

At F8, I had a chance to talk with Oculus Go product manager Madhu Muthukumar about the primary use cases & hardware features of the Oculus Go, but also some of the larger questions about privacy, free speech, and virtual governance. Facebook/Oculus seem to be taking an iterative approach to these questions, but they also tend to be very reactive to problems rather than proactively thinking through the long-term philosophical implications of their technologies and taking preventative measures.

Facebook is dealing with a lot of trust issues, and their message at F8 was that they're 100% dedicated to building technologies that connect people despite the risks: technology can always be abused, but that shouldn't scare us into not building solutions, because Facebook sees that on the whole they are doing more good than bad. The problem is that Facebook is siphoning our private data, eroding privacy, and their quantified world of social relationships has arguably weakened intimate connections in favor of inauthentic interactions. Facebook claims to want to cultivate community, but they fail to connect the dots for how their behaviors around privacy have eroded trust and intimacy, and will continue to weaken the community and connection they claim to be all about. At the end of the day, Facebook is a performance-based marketing company using the mechanism of surveillance capitalism, and ultimately these financial incentives are what drive their success and behaviors.

Virtual reality has the potential to move away from the tyranny of abstractions inherent in communicating through the written word, but Oculus has not promised peer-to-peer encryption to ensure that social interactions in VR have the same level of privacy as personal conversations in real life. They are leaving that door open, which sends me the message that they would like to listen in on everything we say or do in VR. Is that the world that we really want to create? Jaron Lanier argues that it absolutely isn't. At TED this year, Lanier said, "We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it's financed by a third person who wishes to manipulate them."



‘Beat Saber’ Lets You Become the Music Through Puzzles Your Body Solves – Creator Interview

Beat Saber is a rhythm game that plays like an embodied puzzle game, creating a visceral connection to its 10 custom music tracks. It's similar to Soundboxing and Audio Shield, but you use extended lightsabers rather than your fists, which makes you feel like a ninja. I had a chance to talk with chief programmer Jan Ilavsky and music composer Jaroslav Beck at GDC about the development of Beat Saber, some more details about their scoring algorithm, and where they're taking it in the future.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

I had a chance to get early practice with the beta version, and I played for an hour a day over the course of a week. It's an extremely satisfying game that I expect will have a lot of crossover appeal for people who have never tried VR before. The LIV mixed reality streaming integration means that you're going to be seeing a lot of Beat Saber videos over the next few weeks of people sharing their perfect runs, flow states, and expressions of personality through dancing. Beat Saber really utilizes the best aspects of embodied gameplay that are completely unique to VR, and there is a challenging puzzle aspect, with the arrows dictating which direction you need to swipe.

SEE ALSO
'Beat Saber' Early Access Review – a VR Rhythm Game for Budding Jedi Knights

Beat Saber launched in early access on Steam on May 1st, and there’s a lot more content and features sure to come in the future.

Here’s a couple of my runs of Beat Saber on expert mode:



Matt Miesnieks on the State of the AR Ecosystem

Matt Miesnieks is creating an AR cloud with 6D.AI, which aims to "synchronize 3D computer vision data across devices, time and space" in order to enable "persistent content, occlusion, and real-time shared experiences." Miesnieks has founded a number of AR startups including Dekko, and he's also a partner with the early-stage AR fund Super Ventures. I had a chance to catch up with Miesnieks back in October 2017 after the Google Pixel 2 announcement to get an update on the state of the AR industry.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Miesnieks gives a brief survey of the AR ecosystem, compares other AR solutions to what Microsoft’s HoloLens has accomplished, and lists some of the open problems left to be solved in the AR space (including some of the things that 6D.AI are working on).

SEE ALSO
The 3 Biggest Challenges Facing Augmented Reality Today


VRLA’s John Root on AR, Privacy, & eSports in VR

John Root co-founded VRLA with Cosmo Scharf in 2014, and it has organically grown into a thriving VR event with over 10,000 attendees at the Los Angeles Convention Center. VRLA is a non-profit where funding from sponsors helps to support about half of their exhibitors within the event's Indie Zone, which helps them feature a lot of innovative independent VR experiments and start-ups. VRLA is happening this year on May 4th and 5th. I had a chance to walk the showroom floor of VRLA 2017 with Root, where he took me on a guided tour through the history of VRLA, his thoughts on AR, using VR for film production, the future of privacy in AR/VR, as well as why he is interested in seeing where eSports in VR is headed.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Here’s the full press release for VRLA 2018 (Road to VR readers can save 15% on passes, details here):

Announcing VRLA 2018 Keynotes & Expo Highlights

The Biggest Immersive Tech Event of the Year Returns May 4-5, Exploring “A New Reality.”

Los Angeles, CA (April 12, 2018) – Experience next generation immersive technologies and explore “A New Reality” at VRLA, the leading virtual and augmented reality expo, May 4-5 at the Los Angeles Convention Center. Industry-leading companies sponsoring this year’s expo include Intel, Dell, Qualcomm, VIVEPORT, Microsoft, Neurogaming and OptiTrack, as well as a full lineup of innovative companies shaping the future of immersive tech. Passes are available now and start at just $30, and for the first time ever attendees can register in VR via the official Payscout VRLA Registration 2018 app.

Saturday’s keynote will feature a live performance by Light Balance, the captivating dance group from “America’s Got Talent.” Equipped with custom-designed suits that integrate a complex light system with unique wireless controllers, Light Balance blends expert synchronization of music, neon lighting and choreography with breathtaking performances. The Saturday keynote lineup also includes:

  • Walt Disney Imagineering SVP Jon Snoddy
  • A world premiere from Skydance, with Chris Hewish, EVP Games & Interactive and Pablo Leon-Luna, VR Developer
  • Special announcements from Intel, with Kumar Kaushik, GM, Virtual Reality/Augmented Reality
  • YouTube star Vsauce3 (aka Jake Roper)
  • A special announcement from Ricoh
  • VR/AR expert and futurist Charlie Fink

VR and AR professionals who purchase the 2-day industry-focused Pro Pass will gain access to Friday’s keynote in the Dell Theater, featuring a mesmerizing performance by Digital Deception, aka Doug McKenzie and Ryan Oakes – an illusionary duo that combines interactive magic with technology for an unforgettable live spectacle. Friday keynote speakers include:

  • Cliff Plumer, CEO, The VOID
  • Rikard Steiber, President Viveport and SVP of Virtual Reality, HTC Vive
  • Gary Radburn, Director, Virtual and Augmented Reality, Dell
  • Hugo Swart, Senior Director of XR Product Management, Qualcomm
  • Katie Kelly, Head of Engagement, AltspaceVR / Microsoft

In addition to Friday’s keynotes, attendees who purchase the Pro Pass will benefit from access to the full expo, shorter demo lines and an exclusive lineup of professional and developer programming – including “VR Valuation: What Drives ROI,” “Artificial Minds in Artificial Spaces,” “Can VR Be Decentralized Using Blockchain?” and “Discovering New Worlds: Using VR/AR/MR for Space Exploration,” a session with NASA’s Jet Propulsion Laboratory.

Saturday sessions feature industry experts from companies including Facebook, Glitched, VIVEPORT, The Third Floor, Vulcan, Kilograph, Framestore, VIBEHub, Walt Disney Animation, TheWaveVR, Chaos Group, Light Sail VR, Sunnyboy Entertainment, OTOY and more.

From location-based entertainment, motion simulators and VR arcades, to next-gen haptics, 360-degree cameras and the hottest product launches across virtual and augmented reality, this year’s expo will feature something for everyone to explore.

In collaboration with LACMA, the breathtaking centerpiece of this year’s show floor will be Mezo, a 20-foot tall futuristic temple equipped with synchronized LED panels, lasers and spatial music. The interactive art installation will evoke an alternate future where ancient Mesoamerican societies have become technologically advanced, taking attendees on a visually and sonically stunning journey through destruction, creation and rebirth.

For horror fanatics, Red Frog Digital’s Zombie Holomaze will offer a terrifying AR experience for attendees to explore while wearing a Microsoft HoloLens. Mega Particle will debut its cross-platform Virtual Poker Table, with live play culminating in a tournament with award-winning poker champion Phil Hellmuth. For zen and psychedelic experiences, attendees won’t want to miss the Visual Reality Zone, featuring mind-bending immersive digital art installations. Additional expo highlights include Xtrematic’s extreme sports simulator, bHaptics’ full-body haptic suit, Bioflight’s VR medical training simulations and Phasespace’s large-scale motion capture system.

For industry professionals, students and attendees looking to cultivate their skills in VR and AR development, Circuit Stream – the official workshop partner of VRLA 2018 – will offer a variety of cutting-edge educational sessions designed to teach Unity development for platforms like Google Daydream, Samsung Gear VR, HTC Vive, Oculus Rift, Windows Mixed Reality, ARKit, ARCore and Microsoft HoloLens. Attendees can sign up for Circuit Stream’s workshops when they register for VRLA. The “Girls Make VR” workshop is also returning to this year’s expo, offering teenage girls 13-18 the opportunity to learn and create with the latest technology behind today’s most popular VR experiences.

For more information on VRLA 2018 and to purchase passes, visit: http://virtualrealityla.com/


Oculus’ Privacy Architects Discuss Their Open-ended Privacy Policy & Biometric Data

Oculus will be releasing a new Privacy Policy and Terms of Service tomorrow that will go into effect on May 20th, just five days before the EU’s General Data Protection Regulation (GDPR) privacy law enforcement deadline of May 25th. I had a chance to review the new privacy policy and terms of service as well as talk with the lead privacy policy architect Jenny Hall and a privacy cross-functional team member Max Cohen, who leads product for the Oculus platform.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

Jenny Hall

Generally, both the old and new Oculus privacy policies are written in an open-ended way that provides Oculus great leeway in being able to capture and record many different types of data, and the new privacy policy actually adds a number of new passages that allow for new types of data to be collected. Hall & Cohen emphasize that Oculus is committed to transparency and building trust, and that they need this flexibility to account for future applications that haven't even been imagined yet. But as the line between Oculus and Facebook continues to blur, there are still many open questions about what types of data or biometric information gathered from VR will prove to be useful for Facebook's advertising bottom line.

Max Cohen

In talking with me, Hall and Cohen were able to detail how Oculus is taking a much more conservative approach than a worst-case interpretation of what the privacy policy affords, but up to this point their limited implementations have relied upon a 'just trust us' approach without a lot of transparency on the full range of data that is actually being captured and how it is being stored. Oculus will soon be releasing more GDPR-inspired transparency tools so that users will be able to audit what personal data are being recorded and see for themselves, but these tools still will not reveal everything that Oculus is capturing and recording.

On May 20th, Oculus will be releasing a ‘My Privacy Center’ web interface that will allow users to download a copy of the personal data that Oculus has collected, view the information that Oculus collects when you use their platform, and set privacy settings around who can see your real name, real name search, sharing your Oculus apps & activity, as well as who can see your friends list. Hall and Cohen told me that Oculus is really committed to transparency, and these automated privacy tools will be a huge step in actually allowing users to audit what data are being collected.

The current privacy policy allows users to request to download and review their data, but I found the previous method to be both unreliable and non-responsive. Oculus did not respond to my previous email requests that I sent to privacy@oculus.com in January and March 2017, and so I'm happy to see that the GDPR obligations have catalyzed an automated web interface that will provide immediate access to the private data Oculus has captured. When asked if all of the GDPR obligations will be provided to users around the world, an Oculus PR representative responded, "We are making sure everyone has the same settings, controls, and privacy protections no matter where they live, so not just Europe but globally. The GDPR's penalties and notification policies are specific to EU law."

Both the current and new privacy policies are more likely to grant Oculus permissions for what data they can collect than to detail the obligations for how Oculus plans on capturing and storing that data. Hall and Cohen described to me how Oculus takes a tiered approach to privacy where there are at least three major tiers of data that are collected: data that are collected and tied back to personal identity (which they try to limit), data that are de-identified and shared in aggregate (things like physical movements taken at a low sample frequency), and then personal information that is useful for VR and is only stored locally on your machine (like the height of the player).
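The three tiers that Hall and Cohen described can be pictured as a routing decision over different kinds of tracked data. This is an illustrative sketch with hypothetical names (`routeData`, `DataKind`), not Oculus' actual code; it simply encodes the classification as they explained it to me:

```typescript
// Hypothetical sketch of the tiered data handling described above.
// None of these names come from Oculus; they are for illustration only.
type Tier = "tied-to-identity" | "deidentified-aggregate" | "local-only";

type DataKind = "account-event" | "physical-movement" | "player-height";

function routeData(kind: DataKind): Tier {
  switch (kind) {
    case "account-event":
      // Data tied back to personal identity, which they try to limit.
      return "tied-to-identity";
    case "physical-movement":
      // De-identified, taken at a low sample frequency, shared in aggregate.
      return "deidentified-aggregate";
    case "player-height":
      // Useful for VR but only stored locally on the user's machine.
      return "local-only";
  }
}
```

The point of the sketch is that the tier is a property of the kind of data, and that nothing in the privacy policy itself pins down which tier a given kind lands in.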

However, Oculus does not disclose in the privacy policy which tier data will be captured at. For example, in the “Information Automatically Collected About You When You Use Our Services” section, Oculus only says that they collect “information about your environment, physical movements, and dimensions when you use an XR device.” Oculus doesn’t specify that their current recordings of physical movement data are not tied to your identity, that the sample frequencies are too low to fully reconstruct movements, and that it is only presented in aggregate form. This is the type of information that Hall and Cohen provided to me when I asked about it, but Oculus hasn’t disclosed this information in any other way.

The way the privacy policy is written implies that physical movements could indeed be tied to personal identity at as high a sample frequency as they would want. It's this level of vague, open-ended language that would allow Oculus to capture data at a much higher fidelity than they currently are. Because Oculus doesn't commit to any specifics in the privacy policy, they don't have to commit to notifying users if their implementation changes. Currently Oculus isn't tying physical movements to identity, but that could change next month, and there are no notification obligations specified in the privacy policy. The privacy policy merely states that Oculus can record physical movements without being overly prescriptive about how Oculus decides to implement it.

It is worth pointing out that both Hall and Cohen emphasized over and over again that they're really committed to transparency, and that most of their interpretations of the privacy policy are very conservative. They're not trying to scare users, but rather build trust with them. Users will have tools in May to verify what data are actually being recorded, and if there is a mismatch of expectations, with far more data captured than users were expecting, then users will lose trust in Oculus. It takes a lot of time to build trust, but it can be lost in a moment, and Cohen emphasized that losing trust can be detrimental for Oculus. So I took in good faith the message that Oculus' privacy policy needs to be flexible enough for them to provide the services that they are providing, but the privacy policy still only provides limited obligations for what Oculus is committed to providing.

It is likely that this is because Oculus is trying to keep their privacy policy simple in response to GDPR obligations to have human-readable privacy policies that give concrete examples. Hall also said that they're trying to prevent the policy from exploding into hundreds of pages. Downloadable access to the exact data that are collected and tied to identity will also likely solve some of these problems of open-ended and vague language in the privacy policy, but it won't solve all of the transparency issues about what exactly is being recorded.



Updates on the Decentralized Metaverse: WebXR, A-Frame, & ‘Supermedium’

I visited Mozilla's offices last October to chat with A-Frame co-creator & co-maintainer Diego Marcos about the current state of WebVR. Marcos has since left Mozilla in order to work on Supermedium, a desktop VR browser designed for the Vive or Oculus that lets you easily go in and out of viewing WebVR content as a seamless VR experience. Supermedium is a breath of fresh air, letting you seamlessly traverse a curated set of WebVR proofs of concept, though the link traversal and security paradigms of WebXR are still open questions.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

The open metaverse is going to be built on web standards like the WebXR Device API (formerly WebVR), but the larger community of web developers has been waiting to fully commit to building immersive WebVR experiences until there's universal support in all web browsers. The browsers that have implemented the WebVR 1.1 spec include Firefox Release 55, Oculus Browser, Samsung Internet, & Microsoft Edge. But Google Chrome and the WebVR developer community have been waiting for the official launch of what was being referred to as the WebVR 2.0 spec, which was renamed the WebXR Device API in December 2017, as explained in more detail here.
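The split between the two generations of the spec shows up concretely in feature detection: the WebXR Device API exposes `navigator.xr`, while the older WebVR 1.1 spec exposed `navigator.getVRDisplays()`. A minimal sketch of how a page spanning both eras might detect which API is available (the helper name `detectImmersiveApi` is my own, not from either spec):

```typescript
// Which generation of the immersive web API does this browser expose?
type ImmersiveApi = "webxr" | "webvr-1.1" | "none";

// Takes a navigator-like object so the logic is testable outside a browser.
function detectImmersiveApi(nav: any): ImmersiveApi {
  if (nav && nav.xr) {
    // WebXR Device API (navigator.xr), the renamed "WebVR 2.0" effort.
    return "webxr";
  }
  if (nav && typeof nav.getVRDisplays === "function") {
    // Legacy WebVR 1.1 spec (Firefox 55, Oculus Browser, Samsung Internet, Edge).
    return "webvr-1.1";
  }
  return "none"; // no immersive API available
}
```

In a real page you would call `detectImmersiveApi(navigator)` and branch your rendering setup accordingly, which is roughly the compatibility dance that has kept many web developers waiting for the unified API.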

Mozilla announced their Firefox Reality mixed reality browser last week, which is targeting the standalone VR headsets, primarily the Vive Focus and Oculus Go. It'll also work on Daydream as well as Gear VR, but it is designed for an immersive web browsing experience where there isn't a context switch between a 2D screen and a VR HMD. Firefox Reality hasn't implemented any WebVR features yet, and it's currently a proof of concept for showing what browsing 2D web content in VR will look like. The increased resolution of these latest-generation mobile VR headsets and upcoming standalone headsets makes reading text a lot easier than it was in previous iterations.

I've talked about Firefox Reality in previous episodes #350, #471, & #538, when it was still being referred to as the Servo experimental web browser built using the Rust programming language. Firefox Reality is currently the only open source, cross-platform mixed reality browser, and I'm curious to track its development more once they get more of the WebXR features implemented.

In my conversation with Marcos, I'm struck by how many open issues still have to be resolved, including link traversal, a security model that prevents site spoofing, the portal mechanics of traversing multiple sites, and the potential of moving beyond a black-box WebGL canvas toward something more like 3D DOM elements, which would have to address the additional privacy concerns around the gaze and physical movement biometric data that a 3D DOM would introduce.

It's been a long journey to the official launch of WebVR, and here are some of the previous conversations about WebVR since the beginning of the podcast in May 2014.


