My favorite narrative VR piece that I’ve seen so far this year is Rose Colored by Adam Cosco, which premiered at VRLA. It’s a science fiction morality tale that asks us to look at a potential future of immersive technology. I’m cautious about saying much more without spoiling the experience, so I’d highly recommend checking out Rose Colored on Vimeo, Facebook, or Samsung VR before listening to this interview.
LISTEN TO THE VOICES OF VR PODCAST
VR Scout recently named Cosco as “one of VR’s Early Auteurs,” and I definitely have to agree. He has a distinct style with a lot to say about the impact of technology on our humanity.
Rose Colored is his fourth VR narrative piece, and the first one written specifically for VR. There are a lot of profound ideas explored in the piece that really stuck with me afterwards, and I had a chance to dive in and unpack it with Cosco at VRLA in April. We do cover many spoilers in this discussion, so I recommend checking it out first in order to have the full experience.
Sprint Vector is an “adrenaline platformer” racing game that relies upon a unique locomotion technique of swinging your arms in order to run through an obstacle course. It’s the latest game from Raw Data developer Survios, but with a much more light-hearted, stylized art aesthetic: a unique mash-up of a game show, extreme sports, and a competitive racing game with a Sonic-inspired obstacle course. It’s also a unique blend of active exercise with both embodied and abstracted gameplay mechanics.
I had a chance to catch up with Sprint Vector game designer Andrew Abedian at GDC where we talked about the mechanics of racing, the internal habits that are being formed by Survios developers, the intensity of exercise and stamina required to play the game, achieving flow states, and its potential to evolve into a competitive eSport. It’s a super fun experience to watch other people play, either in a tournament competition in mixed reality or just as a quirky form of VR performance art (examples down below).
LISTEN TO THE VOICES OF VR PODCAST
Sprint Vector is now ready for Beta Sign-ups here.
Here are a number of examples of different styles of people playing Sprint Vector:
AxonVR was awarded US Patent No. 9,652,037 for a “whole-body human-computer interface” on May 16th, which includes an external exoskeleton as well as a generalized haptic display made out of microfluidic technology. I had a chance to demo AxonVR’s HaptX™ haptic display that uses a “fluidic distribution laminate” with channels and actuators to form a fluidic integrated circuit of sorts that can simulate the variable stiffness and friction of materials.
At GDC, I stuck my hand into a 3-foot cube device with my palm facing upward. I could drop virtual objects into my hand, and an array of tactile pixels simulated the size, shape, weight, texture, and temperature of these virtual objects. The virtual spider in my hand was the most convincing demo, as the visual feedback helped convince my brain that I was holding the virtual object. Most of the sensations were focused on the palm of the hand, and the fidelity was not high enough to provide convincing feedback to my fingertips. The temperature demos were also impressive, but they were a large contributor to the bulkiness and size of the demo. They’re in the process of miniaturizing their system and integrating it with an exoskeletal system for more force feedback, and the temperature features are unlikely to be integrated into the mobile implementations of their technology.
LISTEN TO THE VOICES OF VR PODCAST
I had a chance to talk with AxonVR CEO Jake Rubin about the process of creating a generalized haptic device, their plans for an exoskeleton for force feedback, and how they’re creating tactile pixels to simulate the cutaneous sensation of different shapes and texture properties. Rubin said that the Experiential Age only has one end point, and that’s full immersion. In order to create something like the Holodeck, Rubin thinks that a generalized haptic device will unlock an infinite array of applications and experiences, analogous to what general computing devices have enabled. AxonVR is not a system that’s going to be ready for consumer home applications any time soon, but their microfluidic approach to haptics is a foundational technology that is going to be proven out in simulation training, engineering design, and digital out-of-home entertainment applications.
Augmented Reality has played a huge role at the developer conferences for Microsoft, Apple, Facebook, and Google, which is a great sign that the industry is moving towards spatially-aware computing. Microsoft is the only company to start with head-mounted AR with the HoloLens, while the other three companies are starting with phone-based AR. They are using machine learning with the phone camera to do six degree-of-freedom tracking, but Google’s Project Tango is the only phone solution that’s starting with a depth-sensor camera. This allows the Tango to do more sophisticated depth-sensor compositing and area learning, where virtual objects can be placed within a spatial memory context that is persistent across sessions. They also have a sophisticated visual positioning service (VPS) that will help customers locate products within a store, which is going through early testing with Lowe’s.
I had a chance to talk with Tango Engineering Director Johnny Lee at Google I/O about the unique capabilities of the Tango phones, including tracking, depth-sensing, and area learning. We cover the underlying technology in the phone, world locking & latency comparisons to HoloLens, the visual positioning service, privacy, future features of occlusion, object segmentation, & mapping drift tolerance, and the future of spatially-aware computing. I also compare and contrast the recent AR announcements from Apple, Google, Microsoft, and Facebook in my wrap-up.
LISTEN TO THE VOICES OF VR PODCAST
The Asus ZenFone AR coming out in July will also be one of the first Tango & Daydream-enabled phones.
There’s an invisible cyber war that’s happening between major nation states, and Zero Days VR takes you inside of it in a completely new way using virtual reality. You go on a journey into a hyper-stylized cyberspace world where you embody the Stuxnet computer virus as it navigates programmable logic controllers, changes code, and destroys Iranian nuclear centrifuges.
LISTEN TO THE VOICES OF VR PODCAST
Zero Days VR is one of the most powerful VR documentaries that I’ve seen so far since it uses the unique affordances of VR to visualize what’s at stake for weaponizing security vulnerabilities, and it uses these volumetric affordances to innovate what’s possible in immersive storytelling. The end result is a visceral and embodied experience of an otherwise complex and abstract topic of cyber warfare that is probably one of the most important stories in our world today.
Zero Days VR is based upon the journalistic work of Alex Gibney’s Zero Days documentary, but it’s not a promotional experience for the movie; rather, it’s a self-contained experience that uses VR to tell aspects of the story that didn’t work as well in the 2D version. The VR experience tells the story as if the main character is code, and they created different immersive environments that reflect testimony from a range of computer experts as well as a number of official government denials.
At Sundance, I had a chance to talk with Scatter’s Creative Director Yasmin Elayat about directing Zero Days VR, and how this project came about through the use of their Depthkit technology in Gibney’s documentary. We also talked about their failed experiments to make this into a non-linear and interactive experience. It turned out that too much journalistic integrity and overall context was lost when they surrendered control over the linear release of evidence, and so they had to abandon the more interactive components that they were building upon their previous experience with the interactive VR doc CLOUDS, created by James George and Jonathan Minard.
Zero Days VR was released on June 8th on Oculus Home for both the Oculus Rift and Gear VR, and it also won an award for Narrative Achievement at Unity’s Vision VR/AR Awards.
Funomena’s Woorld won the ‘Best AR Experience’ category at the recent 2017 Google Play awards. In the game you scan your room with a Google Tango-enabled phone, and then you’re encouraged to decorate your space with extremely cute art and characters designed by Katamari’s Keita Takahashi. Part of the gameplay in Woorld is to figure out how to combine different objects together in order to unlock new objects and portions of the story in your space.
LISTEN TO THE VOICES OF VR PODCAST
Funomena had to innovate on a lot of augmented reality user interaction paradigms and spatial gameplay in designing this game. I had a chance to catch up with Funomena co-founder and CEO Robin Hunicke at Google I/O to talk about her game design process, as well as her deeper intention of bringing sacredness, mindfulness, calmness, worship, spirituality, love, empathy, and kindness into your environment through augmented reality technology. She takes a lot of inspiration from Jodorowsky’s The Technopriests as well as the sci-fi novel Lady of Mazes by Karl Schroeder.
Hunicke also sees a split emerging between the commercial VR and indie VR scenes in the character of content that’s being funded, and she talks about the importance of supporting indie game creators.
Testimony is one of the most profound and powerful applications of virtual reality that I’ve seen so far. It’s an experimental documentary that captures the stories of sexual assault from five women, broken up into five segments. You’re completely immersed within a virtual sphere with these five stories, which are represented as sequences of circles on different lines. As you look at a specific circle, it comes into full frame and plays a 2D video segment from that victim, either sharing her story of sexual assault, the aftermath, her process of healing, or her ideas for how to reform the criminal justice system. The depth of immersion and intimacy that the virtual reality medium enables gives you much more capacity to provide your full attention and to bear witness to these stories of deep emotional intensity. It’s a radical application of VR that represents a revolutionary approach to healing from trauma.
LISTEN TO THE VOICES OF VR PODCAST
Testimony premiered at Tribeca in April, and I had a chance to catch up with Zohar Kfir to talk about the challenges and shame that sexual assault survivors experience. We also talk about how the virtual reality medium is uniquely suited to provide a platform for sexual assault survivors to share their stories of survival. It’s been a profoundly healing experience for these women to authentically share the emotional intensity of their sexual assault experience, as well as the challenges in dealing with the criminal justice system, and the process of healing from trauma.
Testimony shows that virtual reality is able to carry a depth of emotional intensity of trauma that previous mediums were perhaps not as well suited for. The interactive nature of Testimony provides the affordance of being able to look away from a testimony if it becomes too intense; it’ll stop playing and you’ll retreat back into the sphere of women metaphorically standing in solidarity with each other.
I think that it’s an experience that would be difficult to pull off in previous 2D mediums, and I think that it demonstrates how VR has the unique capacity to discuss the types of trauma that were previously only discussed behind closed doors in the context of a therapy session. The level of emotional intimacy and presence that you can achieve in VR allows for a reciprocal transmission and reception of topics that have been either too taboo or too intense for previous communications mediums.
Kfir also has plans to keep this project going as a living and interactive document, so other sexual assault survivors will be able to record their stories and contribute them to the project, where they can be witnessed and heard. Providing a platform for having your sexual assault trauma heard, witnessed, and believed is going to have profound healing implications for the women who participate. It’s a form of distributed and asynchronous Truth and Reconciliation process that will allow victims to release their shame, humiliation, and trauma around being sexually assaulted.
There’s still a long way to go to reform the criminal justice system around cases of sexual assault, but the recently announced Project Callisto is one step forward. It allows victims to report the details of their sexual assault and their perpetrator online. If there are multiple reports against the same person, then it triggers the criminal justice process and optionally connects the women. This is a huge improvement over the current process, and it seems to be a model that has been gaining some traction in other countries.
Testimony is now available as of June 1st, 2017, and it’s one of the most profound and moving experiences that I’ve had in VR so far. Definitely check it out, and share it with your friends and family. You can learn more information from their Testimony website, or follow online with the #ShatterTheSilence hashtag.
Kyle Russell is on the deal & research team for venture capital firm Andreessen Horowitz (aka A16z), where he’s focusing on investing in technologies ranging from virtual & augmented reality and artificial intelligence to drones and other exponential technologies like quantum computing. A16z has invested in a number of prominent VR companies including Oculus, Magic Leap, Within, BigScreen, Lytro, & Improbable.
LISTEN TO THE VOICES OF VR PODCAST
I had a chance to catch up with Russell about how these exponential technologies are combining to solve real problems. Russell used to write for TechCrunch, and so in order to keep up with the latest tech trends he’s become a power user of Twitter lists, subreddits like /r/MachineLearning, and an AI-research aggregator called the Arxiv Sanity Preserver. He’s also tracking many different possible futures when it comes to emerging business models based upon the decentralized blockchain, self-sovereign identity initiatives such as the Decentralized Identity Foundation, as well as the growing wealth disparity and the blending of cooperative and competitive economic approaches.
Google’s overarching mission is to organize all of the world’s information, and so it’s a natural fit for the company to be one of the leading innovators in using VR for immersive education. Google Expeditions was born out of a hackathon soon after the Google Cardboard launched back at Google I/O 2014, and it’s since been shared with over 2 million students who have gone on virtual field trips. At I/O last week, the company had Tango demos that showed me just how compelling augmented reality is going to be in the future of collaborative & embodied educational experiences.
LISTEN TO THE VOICES OF VR PODCAST
I had a chance to catch up with Daydream’s Education Program Manager, Jennifer Holland, at Google I/O, where we talked about the history of Expeditions and how successful it’s been in creating new levels of immersion and engagement with students. She talks about how the Expeditions experiences are designed to be agnostic to any specific age or subject matter, and also independent of any specific teaching strategy or philosophy.
Google has been rapidly iterating on tools that are immediately useful for teachers to introduce immersive experiences into their lesson plans, and a lot is left up to the teacher to guide and direct the interactions and group learning exercises.
Holland also talks about some of the tools that have been built into Expeditions, as well as the feedback that is driving the future of immersive education towards shared augmented reality experiences on Tango-enabled devices.
One of Google’s biggest strengths in the VR community is cultivating mental presence by using open web technologies to fuse together information about our world so that we can experience it in a new way. Google Earth VR is a perfect example of fusing many different sources of data about our world, and providing an entirely new immersive experience of it in VR.
Right now, Google’s Expeditions team and their collaborators are the only ones creating these educational experiences, but they’d eventually like to make it easier for people to create their own Expeditions. The Google Expeditions team announced during their Google I/O session that they’ve been using Mozilla’s WebVR framework, A-Frame, in order to rapidly prototype Expeditions experiences in VR, and Unity to prototype experiences in AR.
I expect that WebVR and WebAR technologies will be a critical part of Google’s VR & AR strategies, as they’re helping to drive the standardization process with the work of WebVR primary spec author Brandon Jones. AR has the advantage over VR that the students’ faces aren’t occluded, and so there is a bit more collaborative learning and interaction between students, which you can see in this video of Expeditions AR:
My direct experience of seeing the Tango AR experiences at Google I/O is that the 6DoF inside out tracking is so good that it’s possible to feel a sense of virtual embodiment as you walk around virtual objects locked in space. I haven’t been able to experience this level of quality tracking in phone-based AR before, and so it was really surprising to feel how immersive it was. You’re able to completely walk around virtual objects, which triggers a deeper level of embodied cognition in being able to interact and make sense of the world by moving your body.
Embodied Cognition is the idea that we don’t just think with our minds, but that we use our entire bodies and environments to process information. I feel that the world-locking capabilities of the Tango-enabled phones start to unlock the unique affordances of embodied cognition that usually come with 6DoF positional tracking, and it was a lot more compelling than I was expecting it to be. After seeing the Tango demos, I feel confident in saying that AR is going to be a huge part of the future of education.
Google Cardboard and Daydream haven’t generated a lot of excitement from the larger VR community, as they’re seen as gateway experiences leading to higher-end, PC-driven experiences. But Google’s ethic of rapidly iterating and creating minimum viable products that are highly scalable has given them over two years of direct experience innovating with immersive education. They’ve been able to reach over 2 million students, and they’ve also been running a number of pilot research studies with these VR Expeditions. Google researchers Matthew Kam and Jinghua Zhang presented some of their preliminary research at the IEEE VR Embodied Learning Workshop in March, and you can see some of the highlights in this Twitter thread, including work that’s happening to create an immersive education primer for Circle Center.
I’m really excited to see how Google continues to innovate with immersive education, and you can look forward to a solo version of Expeditions on Daydream that will be released soon, featuring guided tours, history lessons, and science explainers. What Google is finding is that Expeditions is not just for students, but also for adults pursuing casual and continuing education and for enterprises with training applications; even Major League Baseball has started to explore how to use immersive education experiences to engage audiences in a new way. At the end of the day, Google is showing that if you want to expand your mind and learn about the world, then Daydream & Expeditions are going to have some killer apps for you.
For more information on embodied cognition, be sure to check out these previous interviews:
Mozilla’s mission statement is to ensure that the Internet remains a global public resource, open and accessible to all, and they’ve been helping to bring VR to the web for the past three years. A-Frame is an open source framework for creating WebVR content that has gained a lot of momentum over the last year with more participants on the A-Frame Slack channel than the official WebVR Slack.
LISTEN TO THE VOICES OF VR PODCAST
Diego Marcos
I had a chance to catch up with A-Frame core developers Diego Marcos & Kevin Ngo at the IEEE VR conference in March to get an overview of A-Frame, and how it’s driving WebVR content and innovations in developer tools. Mozilla is also planning on shipping WebVR 1.1 capabilities in the desktop version of Firefox 55, which should be launching in August.
Kevin Ngo
Mozilla believes in open source and the open web, and they have a vibrant and very supportive community on the A-Frame Slack that is very helpful in answering questions. Ngo has been curating the weekly highlights from the A-Frame community for over a year now, posting the latest experiences, components, tools, and events in his Week in A-Frame series on the A-Frame blog, which has helped to grow the A-Frame community.
A-Frame uses an entity-component model that’s very similar to Unity’s, where you spatially position 3D entities within a scene and then add components and scripts that drive the interactive behavior. There’s a visual editor for moving objects around in a scene, and a VR editor is on the roadmap so you’ll be able to put together WebVR scenes in A-Frame while in VR. There’s an open source collection of components that is being officially curated and tested in the A-Frame registry, but there are also various collections of interesting components in GitHub repositories, such as these Awesome A-Frame components or this KFrame collection of components and scenes.
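To give a sense of how that entity-component model looks in practice, here’s a minimal sketch of an A-Frame scene written in its declarative HTML. The primitives (`a-box`, `a-sphere`, `a-sky`) are just convenience wrappers that attach geometry and material components to entities; the version number in the script URL is illustrative of the releases available at the time:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A single script include pulls in the A-Frame framework -->
    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Each tag is an entity; attributes like position, rotation,
           and color are components attached to that entity -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Custom behaviors are added the same way: you register a component in JavaScript and then attach it to an entity as another HTML attribute, which is what makes the model feel analogous to dropping a script onto a Unity GameObject.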
Google even announced at Google I/O that they’re using A-Frame in order to rapidly prototype Google Expeditions experiences. WebVR and A-Frame are a perfect combination to serve Google’s mission of trying to organize all of the information in the world. The strength of the open web is that you’re able to mash up data from many different sources, and so there are going to be a lot of educational and immersive experiences focusing on mental presence built on top of WebVR technologies.
In my interview with WebVR spec author Brandon Jones, he expressed caution about launching the Google Chrome browser with the existing WebVR 1.1 spec, because there are a lot of breaking changes that will need to be made in the latest “2.0” version in order to make the immersive web more compatible with both virtual reality and augmented reality. Because Chrome is on over 2 billion devices, Jones said that they didn’t want to have to manage this interim technical debt and would prefer launching a version that’s going to provide a solid future for the immersive web.
Some WebVR developers like Mozilla’s Marcos and Ngo argue that not shipping WebVR capabilities in a default mainstream browser has hindered adoption and innovation for both content and tooling for WebVR. That’s why Mozilla is pushing forward with shipping WebVR capabilities in Firefox 55, which should be launching on the PC desktop in August.
Part of why Mozilla can afford to push harder for earlier adoption of the WebVR spec is because the A-Frame framework will take care of the nuanced differences between the established 1.1 version of the WebVR spec and the emerging “2.0” version. Because A-Frame is not an open standard, they can also move faster in rapidly prototyping tools around the existing APIs, and they can handle the changes in the lower-level implementations of the WebVR spec while keeping the higher-level A-Frame declarative language the same. In other words, if you use the declarative language defined by A-Frame, then when the final WebVR spec launches you’ll just have to update your A-Frame JavaScript file, which handles the spec implementation and allows you to focus on content creation.
Mozilla wants developers to continue to develop and prototype experiences in WebVR without worrying that they’ll break once the final stable public version of WebVR is finally released. Mozilla is willing to manage the interim technical debt from the WebVR 1.1 spec in order to bootstrap the WebVR content and tooling ecosystem.
Mozilla is also investing heavily in a completely new technology stack with their Servo browser, which could eventually replace their mobile Firefox technology stack. Marcos previously told me that Servo is being built to support immersive technologies like WebVR as a first-class priority over the existing 2D web. Servo has recently added Daydream support, with Gear VR support coming soon. They’ve shown a proof of concept of a roller coaster app built in three.js that runs as a native application within Daydream.
Overall, Mozilla believes in the power of the open web, and wants to be a part of building the tools that enable a metaverse that’s a public resource that democratizes access to knowledge and immersive experiences. There are a lot of questions around concepts like self-sovereign identity, how an economy is going to be powered by some combination of cryptocurrencies and the Web Payments API, as well as concepts of private property ownership and how that might be managed by the blockchain. A lot of the concepts of a gift economy that Cory Doctorow explores in Walkaway are being actively implemented by Mozilla through the open source creation of the Metaverse, and everyone in the WebVR community is looking forward to a stable release later this year. For Mozilla, that begins in August with Firefox 55, but this is just the beginning of a long journey of realizing the potential of the open web.