Updates on the Decentralized Metaverse: WebXR, A-Frame, & ‘Supermedium’

I visited Mozilla’s offices last October to chat with A-Frame co-creator & co-maintainer Diego Marcos about the current state of WebVR. Marcos has since left Mozilla to work on Supermedium, a desktop VR browser designed for the Vive and Rift that lets users move in and out of WebVR content as a seamless VR experience. Supermedium is a breath of fresh air for seamlessly traversing a curated set of WebVR proofs of concept, though the link traversal and security paradigms of WebXR are still open questions.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

The open metaverse is going to be built on web standards like the WebXR Device API (formerly WebVR), but the larger community of web developers has been waiting for universal browser support before fully committing to building immersive WebVR experiences. The browsers that have implemented the WebVR 1.1 spec include Firefox Release 55, Oculus Browser, Samsung Internet, & Microsoft Edge. But Google Chrome and the WebVR developer community have been waiting for the official launch of what was being referred to as the WebVR 2.0 spec, which was renamed to the WebXR Device API in December 2017 and is explained in more detail here.
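For developers deciding what to target today, the practical difference between the two generations shows up in feature detection. Here’s a minimal sketch, written as a pure function over a navigator-like object so the logic can run outside a browser; the property names come from the two specs (`navigator.getVRDisplays` for WebVR 1.1, `navigator.xr` for the WebXR Device API), though the WebXR surface was still in flux at the time of writing.

```javascript
// A minimal sketch of feature-detecting which immersive-web API a
// browser exposes. Written as a pure function over a navigator-like
// object so it can be exercised outside a browser.
function detectImmersiveApi(nav) {
  if (nav && typeof nav.xr !== 'undefined') {
    return 'webxr';       // WebXR Device API (the renamed "WebVR 2.0")
  }
  if (nav && typeof nav.getVRDisplays === 'function') {
    return 'webvr-1.1';   // Legacy WebVR 1.1
  }
  return 'none';          // No immersive API; fall back to 2D rendering
}
```

In a browser you’d call `detectImmersiveApi(navigator)` and branch your rendering path accordingly.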

Mozilla announced their Firefox Reality mixed reality browser last week, which targets standalone VR headsets, primarily the Vive Focus and Oculus Go. It’ll also work on Daydream as well as Gear VR, and it’s designed for an immersive web browsing experience where there’s no context switch between a 2D screen and a VR HMD. Firefox Reality hasn’t implemented any WebVR features yet; it’s currently a proof of concept showing what browsing 2D web content in VR will look like. The increased resolution of the latest generation of mobile and standalone headsets makes reading text a lot easier than it was in previous iterations.

I’ve talked about Firefox Reality in the previous episodes of #350, #471, & #538 when it was still being referred to as the Servo experimental web browser built using the Rust programming language. Firefox Reality is currently the only open source, cross-platform mixed reality browser, and I’m curious to track the development more once they get more of the WebXR features implemented.

In my conversation with Marcos, I was struck by how many open issues still have to be resolved, including link traversal, a security model that prevents spoofed sites, the portal mechanics of traversing multiple sites, and the potential of moving beyond a black-box WebGL canvas toward something more like 3D DOM elements, which would have to address the additional privacy implications of the gaze and physical-movement biometric data that a 3D DOM would expose.

It’s been a long journey to the official launch of WebVR, and here are some of the previous conversations about WebVR since the beginning of the podcast in May 2014.


Support Voices of VR

Music: Fatality & Summer Trip

The post Updates on the Decentralized Metaverse: WebXR, A-Frame, & ‘Supermedium’ appeared first on Road to VR.

The Yang and the Yin of Immersive Storytelling with Oculus’ Yelena Rachitsky

The future of VR storytelling will be immersive and interactive. Yelena Rachitsky is an executive producer of experiences at Oculus, and she’s been inspired by how interactive narratives have allowed her to feel like a participant who is more engaged, more present, and more alive. The fundamental challenge of interactive narratives is how to balance making choices and taking action against receiving a narrative, being emotionally engaged, and having an embodied experience of immersion and presence. Balancing these active and passive dimensions is the underlying tension of the yang and yin of any experience.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST


The boundaries between what is a game and what is an immersive story will continue to be blurred, but Rachitsky looks at the center of gravity of an experience. Are you centered in your embodied experience and emotional engagement of a story (yin)? Or are you centered in your head of thinking about the strategy of your next action in achieving a goal in a game (yang)?

She recommends that experiential designers start with the more yin aspects of an experience, including the feeling, the colors, the space, and the visceral sensory experience of a story that you’re telling directly to someone’s body. She’s also been finding a lot of inspiration and innovation for the future of storytelling in immersive theater, where actors use their body language to communicate unconsciously with the audience and move their bodies through space in order to drive specific behaviors. The Oculus-produced Wolves in the Walls used immersive theater actors from the production Then She Fell to do the motion capture, and to help tell the spatial story using the body language of an embodied character.

I had a chance to catch up with Rachitsky at Sundance this year where Oculus had five different experiences including Dispatch, Masters of the Sun, Space Explorers, Spheres, & Wolves in the Walls. Rachitsky has been key in helping to discover immersive storytellers and supporting projects that push the edge of innovation when it comes to the future of interactive storytelling. She says that the biggest open question that is driving her journey into immersive storytelling is “How can you be passive and active at the same time?”

Rachitsky says that immersive storytelling isn’t about the beginning, middle, or end, but rather about cultivating an experience that you have, and about the story that you tell yourself after you take the headset off. This matches some of the depth-psychological perspectives on immersive storytelling that John Bucher shared in his book Storytelling for Virtual Reality, where VR storytelling could be used as a technological vehicle for inner reflection and contemplation.

I suspect that the focus on embodiment and the audience’s direct experience is part of a larger trend toward new forms of storytelling that transcend the Yang Archetypal Journey of Joseph Campbell’s Hero’s Journey. VR and AR are more suited to a more receptive Yin Archetypal Journey that I would describe as non-linear, cyclical, embodied, sensory, centered in your own experience, environmental, nurturing, receptive, cooperative, community-driven, worldbuilding, depth-psychological, connective, transcendent, esoteric, & alchemical.

The exact patterns and underlying structures of this more yin archetypal journey are still being explored in VR stories, but there’s likely a lot of inspiration to be found in the kishōtenketsu literary structures of classic Chinese, Korean, and Japanese narratives, which focus on conflict-free stories of cooperation, collaboration, and revealing holistic interconnections in which the totality is greater than the sum of the individual parts.

I’ve recorded nearly 100 interviews on the future of immersive storytelling now (here’s a list of the Top 50 from 2016), and a consistent theme has been this underlying tension of giving and receiving where there is a striving for a balance of the active and passive experience. I find that the concepts of the yang and the yin from Chinese philosophy and the four elements from natural philosophy provide compelling metaphors to talk about this underlying tension.

Using metaphors from natural philosophy, the fire element (active presence) and air element (mental & social presence) are yang expressions of exerting energy outward, while the water element (emotional presence) and earth element (embodied & environmental presence) are more yin expressions of receiving energy internally. My keynote from the Immersive Technology Conference elaborates on how these play out in the more yang communications mediums like video games and the more yin communications mediums of film and VR.

Video games focus on outward yang expressions of making choices and taking action while film focuses on inward yin expressions of receiving an emotionally engaging story. VR introduces the body and direct embodied sensory experience, but it’s possible that this focus on embodiment and presence helps to create new expressions of yin archetypal stories that have otherwise been impossible to tell.

Most of my recent conversations about VR storytelling from Sundance 2018 & the Immersive Design Summit have been focused on this emerging yin archetypal journey of how embodiment & presence are revealing these new structures of immersive storytelling:

The concept of a “Living Story” from the Future of Storytelling’s Charlie Melcher is very similar to what The VOID’s Camille Cellucci calls “Story-Living,” which is about “creating spaces and worlds where people have a chance to live out their own stories within a framework that we design.” The recently released Ready Player One movie did not include some of the ‘story-living’ live-action role-playing scenes that were included within the novel, but Ernest Cline was definitely attuned to the trend toward immersive narratives when his novel came out in 2011, which is the year that the Punchdrunk immersive theater production Sleep No More opened in New York City.

Whether it’s a living story or story-living, both involve becoming an active participant and character within the story that’s unfolding. AI is going to play a huge role in helping to resolve some of this tension between authorial control of the story and creating generative possibility spaces, and it’s something that I’m starting to explore in the Voices of AI podcast with interviews with AI storytelling pioneer Michael Mateas, AI social simulator designer & improv actor Ben Samuel, and AI researcher/indie game developer Kristin Siu. Oculus’ Rachitsky is looking forward to integrating more and more AI technologies within future VR storytelling experiences, and she’s even experimenting with using live actors randomly appearing within some future VR experiences that she’s working on.

I expect the underlying tension between giving and receiving, active and passive, and the yang and the yin to continue to be explored through a variety of different immersive storytelling experiences. While Ready Player One explores a typical Yang Archetypal Journey in the style of Campbell’s monomyth, these types of active gaming and mental puzzle-solving experiences may look great on a film screen, but they’re not always compelling VR experiences that amplify the unique affordances of immersion and presence.

I predict that immersive storytellers will continue to define and explore new storytelling structures, which I expect will initially focus on this more Yin Archetypal Journey of immersion and presence. There will continue to be a fusion of traditional storytelling techniques from cinema, but it’s possible that VR stories need to completely detach from the paradigms of storytelling that tend to focus on conflict, drama, and outward journeys.

It’s possible that the kishōtenketsu story structures from Eastern cultures might work well in VR, as they focus on more cooperative and conflict-free stories centered on the Gestalt of interconnectivity. If there does turn out to be a fundamental Yin Archetypal Journey structure that’s different from Campbell’s monomyth, then it’s likely that these stories have been ignored and overlooked, and possible that the mediums of VR and AR have been needed to give people an embodied, direct experience of these types of stories.

Eventually we’ll be able to find a perfect balance of the yang and the yin in immersive stories, but before we get there we’ll need to focus on this Yin Archetypal Journey of immersion and presence. Once we open our minds to the optimal structures for embodied stories that center us in our experiences, I expect a more seamless integration of live-action role play, gaming elements, social interactions, and collaborative stories.



The post The Yang and the Yin of Immersive Storytelling with Oculus’ Yelena Rachitsky appeared first on Road to VR.

‘VRChat’ is the Closest VR Experience Available Today to ‘Ready Player One’s’ OASIS

Ready Player One was released on March 29th, and the current VR experience that comes closest to realizing the vision of the OASIS depicted in the movie is the social VR experience VRChat. By 2045, I predict that we’ll have a decentralized open metaverse built on WebXR, and that it’ll be the open web that realizes the educational potential of the OASIS depicted within the novel version of Ready Player One. But I think we’ll continue to have open and closed systems just as we do today, and it’s likely that there will be a closed, walled-garden metaverse of interconnected worlds that is more akin to the vision of the OASIS built by Gregarious Simulation Systems. But if you want to experience the OASIS today, then VRChat is the experience to check out.

LISTEN TO THE VOICES OF VR PODCAST


I had a chance to talk with VRChat’s CEO and co-founder Graham Gaylor and Chief Creative Officer Ron Millar at GDC, where we talked about the recent growth of VRChat, what types of trends they’re seeing, what they’re doing to support streamers, and where they’re going in the future.

VRChat was one of the first social VR experiences to enable fully customizable avatars, and they also allow creators to upload their own worlds. VRChat went through some exponential growth in the fall of 2017, thanks in part to a number of YouTubers, including Jameskii & Nagzz21, and Twitch personalities like pokelawls, dyrus, greekgodx, and LIRIK, who discovered the unique user-generated VRChat worlds, customized avatars, a fusion of pop culture references & memes, as well as opportunities for live-action role play, cosplay, social games, and serendipitous social interactions in these virtual worlds.


VRChat has the most advanced friend-finding features of any social VR experience, and it also offers the user experience of traversing between virtual worlds with a group of people by dropping in-world portals. It also has a diverse range of private social VR experiences, including invite-only instances and instances where anyone invited can invite their friends.

VRChat has a 2D desktop version of the experience that allows non-VR PC players to participate in the social VR worlds, but with limited functionality, including not being able to use their hands. Not all of the concurrent users on VRChat are in virtual reality, but it’s an experience that’s been inspiring a lot of people to buy a VR system in order to enhance their experience in the game. This support for non-VR participation has contributed to their rise in popularity, and this success means that they’ve had to deal with the various moderation challenges that come with operating virtual social spaces at this scale.

There aren’t any VR experiences yet that realize the full potential of the OASIS, the VR metaverse depicted in the novel Ready Player One, as there are a lot more educational aspects to the OASIS that didn’t make the cut for the movie version. I think it’s most likely that these types of educational experiences will be built on the open web using open standards like WebXR, and currently Google’s Expeditions is probably the closest platform to realizing the more educational aspects of the OASIS. It was encouraging to hear the Google Expeditions team talk about using A-Frame to build experiences that target the web, and so by 2045 I expect that there will be a combination of centralized walled gardens as well as decentralized open metaverse worlds built upon open standards like WebXR.

VRChat has opted to create the best experiences possible today by creating a centralized solution where they are hosting all of the content, and they’re in charge of moderating content and social behaviors. The long-term business model for VRChat hasn’t been announced to the community yet, but Chief Creative Officer Ron Millar told me that they want to be sure that there’s a way for the most successful content creators to be compensated for their creations.

CEO Graham Gaylor said that VRChat is open to exploring various decentralized solutions if it’s something that the community of users starts to ask for, but they’re currently focused on creating the best user experience possible using the technologies that are out there. They’ve been focused on supporting Unity with their SDK, which has allowed existing VR creators to jump into creating worlds for VRChat.

AltspaceVR actually put a lot of engineering effort into integrating the open web into their platform. JanusVR has the most advanced implementation of seamlessly integrating with the open web, and Philip Rosedale’s High Fidelity has always taken a hybrid approach of blending centralized and decentralized architectures. An updated version of the networked A-Frame plug-in was just released, and it should provide a foundation for social VR experiences on the open web. These more decentralized solutions take more time, effort, and energy to design experiences for, and they’re using web technologies that still aren’t consistently at the level of quality that native applications can achieve. High Fidelity is probably the closest to achieving experience parity with their framework built using JavaScript as the primary coding language, but it still doesn’t have the same consistency or quality as native code, even though it’s rapidly improving all the time.
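For a sense of what building social VR on these open web tools looks like, here is a minimal sketch of a shared scene using A-Frame with the networked A-Frame plug-in. Treat the component names (`networked-scene`, `networked`), the template pattern, and the script URLs as assumptions drawn from the project’s public examples, not a tested, version-pinned setup:

```html
<html>
  <head>
    <!-- Assumed script locations; pin real versions in practice. -->
    <script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
    <script src="easyrtc/easyrtc.js"></script>
    <script src="dist/networked-aframe.min.js"></script>
  </head>
  <body>
    <a-scene networked-scene="app: myApp; room: lobby;">
      <a-assets>
        <!-- Each remote player is instantiated from this template. -->
        <template id="avatar-template">
          <a-sphere color="#5985ff" scale="0.45 0.5 0.4"></a-sphere>
        </template>
      </a-assets>
      <!-- The local player; position and rotation get synchronized. -->
      <a-entity networked="template: #avatar-template"
                camera position="0 1.6 0"
                wasd-controls look-controls></a-entity>
    </a-scene>
  </body>
</html>
```

Each connected client spawns an avatar entity for every remote participant and keeps its transform in sync over the network, which is the kind of foundation for social VR on the open web described above.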

It’ll be interesting to see how VRChat continues to grow and expand; they have a publicly listed set of feature requests on their VRChat feedback page that provides a sense of their roadmap and popular requests. After WebXR 1.0 launches later this year, I expect some of the more decentralized open web approaches to start rapidly improving, but it’s still going to take a number of years before these open web experiences catch up to the quality of the social VR experience currently provided by VRChat.

But Rosedale believes in the power of Metcalfe’s Law, which says that the value of a network increases with the square of the number of nodes in that network. This means that as more people start creating experiences for the open web, the network effects make the open web exponentially more valuable. Rosedale expects the evolution of the metaverse to mirror what happened when people chose the open web over the more curated and polished, centralized walled gardens of AOL, CompuServe, and CD-ROMs.
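Metcalfe’s Law is simple enough to state in a few lines of code. This sketch only illustrates the scaling argument, with the value-per-link constant abstracted away:

```javascript
// Metcalfe's Law: a network's value grows with the square of its
// number of nodes (roughly the count of possible pairwise links,
// n * (n - 1) / 2, which grows as n^2). The value-per-link constant
// is omitted; only the scaling matters here.
function metcalfeValue(nodes) {
  return nodes * nodes;
}

// Doubling the user base quadruples the network's value:
const growth = metcalfeValue(1000) / metcalfeValue(500); // → 4
```

This quadratic growth is why network effects can let an initially smaller open ecosystem overtake a polished walled garden once creators start joining it.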

The recent privacy backlash against centralized companies like Facebook is an indication that the general public is starting to realize the dangers of a centralized entity growing to the scale of billions of users, and the antidote is decentralized architectures that protect user privacy. Chris Dixon’s essay “Why Decentralization Matters” documents how much of the VC funding and many of the smartest entrepreneurs in Silicon Valley are starting to build decentralized systems based on the blockchain. There’s a larger trend in the tech industry toward building a more viable decentralized economy, one that isn’t as susceptible to a handful of centralized players like Facebook or Google completely dominating the online ad marketplace through their business models of surveillance capitalism.

In another recent announcement at GDC, High Fidelity & JanusVR launched the Virtual Reality Blockchain Alliance, which emphasizes portable identities, registered assets, and acceptance of virtual currencies. These types of decentralized architectures are more forward-looking and are building for the future, but VRChat’s Gaylor says that it has taken these companies a lot longer to build a user experience as compelling as what VRChat has been able to create. VRChat has opted to push the boundary of what is even possible, which gives the decentralized solutions a design goal to aspire to.

So in conclusion, I think that VRChat is still the closest VR experience that exists today to realizing the vision of the OASIS depicted in the Ready Player One movie, but I suspect that by 2045 the open web built on top of the WebXR open standards is going to realize the full vision of the open metaverse and the educational potential of the OASIS depicted within the novel. Both are important parts of creating the social VR metaverse that we all want to see in the future, and hopefully the dystopian sci-fi visions of the future depicted in Ready Player One can help us recognize the downfalls of centralized power, and inspire us to build the open and sustainable metaverse on a decentralized architecture that we all deserve.



The post ‘VRChat’ is the Closest VR Experience Available Today to ‘Ready Player One’s’ OASIS appeared first on Road to VR.

A Candid Assessment of the Industry From a Leading Indie VR Developer

What is happening with the VR industry? Are adoption and growth still on target to support a vibrant and diverse ecosystem of independent VR developers? Leading headset manufacturers have not been transparent in sharing any specific information, and the analyst data that has been released hasn’t been a reliable or comprehensive source of information. So it’s been difficult to get an honest and candid assessment of the overall health and vibrancy of the virtual reality ecosystem. But there are a few companies who have some deeper insights into the VR ecosystem: the independent VR development companies who have released best-selling VR titles.

LISTEN TO THE VOICES OF VR PODCAST

Denny Unger of Cloudhead Games has the unique perspective of having a VR launch title, The Gallery: Call of the Starseed (2016), bundled with the HTC Vive, and then a year and a half later, the sequel, The Gallery: Heart of the Emberstone (2017).

There was a significant dropoff in sales from their first title to their second, and so Unger has the experience of going from the ‘Peak of Inflated Expectations’ on the Gartner Hype Cycle down into the ‘Trough of Disillusionment’. He expects that VR will turn the corner within the next couple of years, and that focusing on producing smaller experiences aimed at the VR arcade market is going to be one way for indies to survive this temporary winter in the VR market.

Image courtesy Jeremykemp at English Wikipedia (CC BY-SA 3.0)

Unger says that headset sales are being used as the primary metric for success; however, as an independent VR studio they are more interested in looking at active attachment rates when doing their own internal planning for the next couple of years. That is: how many VR consumers are using their VR headsets every day or at least once a month, versus how many have bought a VR headset but aren’t using it because the content hasn’t been compelling enough to keep them coming back?

Unger suggests that active attachment rates are the more important metric, but none of the major platform players in the VR industry want to have a transparent and honest conversation about how this ecosystem is growing, how to best track and promote growth, and whether or not their strategies are working. Unger suggests that there’s a middle tier of independent VR developers who have not received a lot of support from the major headset manufacturers, and even the mainstream press hasn’t been investing the time or interest in covering some of these non-AAA-tier VR game experiences.

So I had a chance to catch up with Unger at GDC, where we had a candid conversation about the state of the VR industry, why he thinks VR games may be in the ‘Trough of Disillusionment’ (other VR industry verticals could be on the ‘Slope of Enlightenment’), and some of the things that major headset manufacturers and content funders can do to support this middle tier of VR development in order to have a more robust, healthy, and vibrant VR developer ecosystem.



The post A Candid Assessment of the Industry From a Leading Indie VR Developer appeared first on Road to VR.

Exploring Near-Future Moral Quandaries with INVAR Studios

Rose Colored by INVAR Studios & Adam Cosco won the award for best live-action VR experience at the Advanced Imaging Society’s Lumiere Awards on February 12th. I previously interviewed Cosco at VRLA last year, and I had a chance to talk with INVAR’s co-founder & chief creative officer Vincent Edwards & creative director Austin Conroy about their thoughts on the future of storytelling in VR at Kaleidoscope VR’s FIRST LOOK VR Market.

LISTEN TO THE VOICES OF VR PODCAST


Rose Colored is a near-future speculative sci-fi tale in the same cautionary vein as Black Mirror, but with a little more of an optimistic bent. Edwards identifies as an inveterate optimist, and enjoys the process of world-building potential futures in VR and exploring the moral quandaries of the logical extremes of how AR & AI technologies will impact our lives and romantic relationships. Conroy identifies as a storytelling geek, and is really interested in VR’s capability to let you embody a character using the visual storytelling affordances cultivated by cinema. How to get the audience inside a fictional character’s head is an open question, which he compares to building a mind.

Edwards says that VR storytelling reminds him of the early days of the DIY punk rock scene in Los Angeles, where there was a lot of experimentation and a willingness to forget everything you know. A lot of lessons about visual storytelling will come from film, while the interactive storytelling innovations for VR are more likely to come from game developers.

As far as where VR & AR go in the future, both Edwards and Conroy take inspiration from Buddhist and Hindu concepts. Conroy cites a passage from Eknath Easwaran’s translation of the Dhammapada saying that our experiences could be thought of as a projection, similar to how we experience the continuity of a story when a movie projects 24 frames per second onto a screen. Edwards says that if that’s true, then perhaps VR could provide us with training wheels to cut through the matrix and “awaken from the dream that is Maya.” They acknowledge that these are some dense philosophical and metaphysical ideas, but that it’s part of the deeper motivations for INVAR Studios to create multi-platform stories that help us reflect on our identity and experiences in life, and to give us stories about potential futures that help us reconcile with the nature of reality today.



The post Exploring Near-Future Moral Quandaries with INVAR Studios appeared first on Road to VR.

Training AI & Robots in VR with NVIDIA’s Project Holodeck

At SIGGRAPH 2017, NVIDIA was showing off their Isaac robot, which had been trained to play dominoes within a virtual world environment of NVIDIA’s Project Holodeck. They’re using Unreal Engine to simulate interactions with people in VR to train a robot how to play dominoes. They can use a unified code base of AI algorithms for deep reinforcement learning within VR, and then apply that same code base to drive a physical Baxter robot. This creates a safe context to train and debug the behavior of the robot within a virtual environment, and also to experiment with cultivating interactions with the robot that are friendly, exciting, and entertaining. This allows humans to build trust interacting with robots in a virtual environment so that they are more comfortable and familiar interacting with physical robots in the real world.
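As an illustration of the loop being described, here is a toy sketch of reinforcement learning in a simulator: ordinary tabular Q-learning on a one-dimensional corridor. This is emphatically not NVIDIA’s Isaac pipeline (which uses deep reinforcement learning inside Unreal Engine); it only shows the shape of the idea that a policy learned entirely inside a simulated environment can later drive a physical actuator.

```javascript
// Toy train-in-simulation sketch: tabular Q-learning on a 1-D corridor.
// The agent starts at the left end and is rewarded for reaching the
// right end; the learned Q-table is the artifact you could hand to a
// (hypothetical) physical controller.
function trainInSim({ states = 6, episodes = 2000, alpha = 0.5,
                      gamma = 0.9, epsilon = 0.2 } = {}) {
  // Q[state][action]; action 0 = step left, action 1 = step right.
  const Q = Array.from({ length: states }, () => [0, 0]);
  for (let ep = 0; ep < episodes; ep++) {
    let s = 0;                                   // start at the left end
    while (s < states - 1) {                     // goal is the right end
      const explore = Math.random() < epsilon;   // epsilon-greedy action
      const a = explore ? (Math.random() < 0.5 ? 0 : 1)
                        : (Q[s][1] >= Q[s][0] ? 1 : 0);
      const s2 = Math.max(0, Math.min(states - 1, s + (a === 1 ? 1 : -1)));
      const reward = s2 === states - 1 ? 1 : 0;  // reward only at the goal
      // Standard Q-learning update toward the bootstrapped target.
      Q[s][a] += alpha * (reward + gamma * Math.max(...Q[s2]) - Q[s][a]);
      s = s2;
    }
  }
  return Q;
}

// The greedy policy extracted from Q steps right in every state.
const Q = trainInSim();
const policy = Q.slice(0, -1).map(q => (q[1] >= q[0] ? 'right' : 'left'));
```

The same debug-in-simulation benefit applies here in miniature: you can stress the agent against edge cases (walls, dead ends) without any physical hardware at risk.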

LISTEN TO THE VOICES OF VR PODCAST

I talked with NVIDIA’s senior VR designer on this project, Omer Shapira, at the SIGGRAPH conference in August, where we talked about using Unreal Engine and Project Holodeck to train AI, using a variety of AI frameworks that can use VR as a reality simulator, stress-testing for edge cases and anomalous behaviors in a safe environment, and how they’re cultivating social awareness and robot behaviors that improve human-computer interactions.

Here’s NVIDIA CEO Jensen Huang talking about using VR to train robots & AI:

If you’re interested in learning more about AI, then be sure to check out the Voices of AI podcast which just released the first five episodes.



The post Training AI & Robots in VR with NVIDIA’s Project Holodeck appeared first on Road to VR.

The Future of Virtual Lightfields with Otoy CEO Jules Urbach

Otoy is a rendering company that’s pushing the limits of digital light fields and physically-based rendering. Now that Otoy’s Octane Renderer has shipped in Unity, they’re pivoting from licensing their rendering engine to selling cloud computing resources for rendering light fields and physically-correct photon paths. Otoy has also completed an ICO for their Render Token (RNDR), and will continue to build out a centralized cloud-computing infrastructure to bootstrap a more robust distributed rendering ecosystem driven by an Ethereum-based ERC20 cryptocurrency market.

LISTEN TO THE VOICES OF VR PODCAST

I talked with CEO and co-founder Jules Urbach at the beginning of SIGGRAPH 2017, where we talked about relighting light fields, 8D light fields & reflectance fields, modeling physics interactions in light fields, optimizing volumetric light field capture systems, converting 360 video into volumetric video for Facebook, and their move into creating distributed render farms.

In my previous conversations with Urbach, he shared his dreams of rendering the metaverse and beaming the matrix into your eyes. We continue that conversation by diving down the rabbit hole into some of the deeper philosophical motivations that are driving and inspiring Urbach’s work.

This time Urbach shares his visions of VR’s potential to provide us with experiences that are decoupled from the normally expected levels of entropy and energy transfer for an equivalent meaningful experience. What lies below Planck’s constant? It’s a philosophical question, but Urbach suspects that there are insights to be found in information theory, since Planck’s photons and Shannon’s bits have a common root in thermodynamics.
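The “bits” half of that connection is concrete: Shannon entropy has the same mathematical form as the Gibbs entropy of statistical mechanics, up to Boltzmann’s constant and the base of the logarithm. A minimal sketch:

```javascript
// Shannon entropy in bits: H = -sum(p_i * log2(p_i)) over a
// probability distribution. The Gibbs entropy of statistical
// mechanics has the same form, scaled by Boltzmann's constant and
// with a natural logarithm.
function shannonEntropyBits(probabilities) {
  return probabilities
    .filter(p => p > 0)   // the limit of p * log2(p) as p -> 0 is 0
    .reduce((h, p) => h - p * Math.log2(p), 0);
}

shannonEntropyBits([0.5, 0.5]); // → 1: a fair coin flip carries one bit
shannonEntropyBits([1]);        // → 0: a certain outcome carries none
```

That shared form is the formal basis for treating information and thermodynamic entropy as two views of the same quantity.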


He wonders whether the halting problem suggests that a simulated universe is not computable, and whether Gödel’s incompleteness theorems suggest that we’ll never be able to create a complete model of the universe. Either way, Urbach is deeply committed to trying to create the technological infrastructure to render the metaverse, and to continue probing for insights into the nature of consciousness and the nature of reality.

Here’s the launch video for the Octane Renderer in Unity:



The post The Future of Virtual Lightfields with Otoy CEO Jules Urbach appeared first on Road to VR.

AR & AI Storytelling Innovations in ‘TendAR’ from Studio Tender Claws

Tender Claws, the creators of the award-winning interactive VR narrative Virtual Virtual Reality, premiered a new site-specific, interactive AR narrative experience called TendAR at the Sundance New Frontier. It was a social augmented reality experience that paired two people holding a cell phone and sharing two channels of an audio stream featuring a fish who guides the participants through a number of interactions with each other and explorations of the surrounding environment. The participants were instructed to express different emotions in order to “feed” and “train” the AI fish. Google’s ARCore technology was used for the augmented reality overlays, along with the Google Cloud Vision API for object detection, as well as early access to some of Google’s cutting-edge Human Sensing technology that could detect the emotional expressions of the participants.

LISTEN TO THE VOICES OF VR PODCAST

Overall, TendAR was a really fun and dynamic experience that showed how the power of AR storytelling lies not only in doing interesting collaborative exercises with another person, but also in becoming aware of your immediate surroundings, context, and environment, where objects can be discovered, detected, and integrated as part of an interaction with a virtual character.

I had a chance to talk with Tender Claws co-founder Samantha Gorman about the studio’s approach to experiential design for an open-ended interactive AR experience, the unique affordances and challenges of augmented reality storytelling, their collaboration with the interactive storytelling theater group Piehole, the challenges of using bleeding-edge AI technologies from Google, and some of their future plans to expand this prototype into a full-fledged, 3-hour, solo AR experience with a number of excursions and social performative components.

Here’s a brief teaser for TendAR:

Gorman said that they’re not planning on storing or saving any of the emotional recognition data on their side, and this is the first time that I’ve ever heard anything about Google’s Human Sensing group. I trust Tender Claws to be good stewards of my emotional data, and their TendAR experience shows the potential of what type of immersive narrative experiences are possible when integrating emotional detection as an interactive biofeedback mechanic.

Mimicking a wide range of different emotional states can evoke a similarly wide range of different emotional states, and so I found that TendAR provided a really robust emotional journey that was a satisfying phenomenological experience. TendAR was also an emotionally intimate experience to share with a stranger at a conference like Sundance, but it demonstrates the power of where AR storytelling starts to shine: creating contexts for connection and opportunities to create new patterns of meaning in your immediate surroundings.

However, the fact that Google is working on technology that can capture and potentially store emotional data of users introduces some more complicated privacy implications that are worth expanding upon. Google and Facebook are performance-based marketing companies who are driven to capture as much data about everyone in the world as possible, and VR & AR technologies introduce the opportunity to capture much more intimate data about ourselves. Biometric data and profiles of our emotional reactions could reveal unconscious patterns of behavior that could be ripe for abuse, or be used to train AI algorithms that reinforce the worst aspects of our unconscious behaviors, to the benefit of others.

I’ve had previous conversations about privacy in VR with behavioral neuroscientist John Burkhardt who talked about the unknown ethical threshold of capturing biometric data, and how the line between advertising and thought-control starts to get blurred when you’re able to have access to biometric data that can unlock unconscious triggers that drive behavior.

VC investor and privacy advocate Sarah Downey talked about how VR could become the most powerful surveillance technology ever invented, or it could become one of our last bastions of privacy if we architect systems with privacy in mind (SPOILER ALERT: Most of our current systems are not architected with privacy in mind, since they’re capturing and storing as much data about us as possible).

And I also talked with VR privacy philosopher Jim Preston who told me about the problems with the surveillance-based capitalism business models of performance-based marketing companies like Google and Facebook, and how privacy in VR is complicated and that it’s going to take the entire VR community having honest conversations about it in order to figure it out.

Most people get a lot of benefit from these services, and they’re happy to trade their private data for free access to products and services. But VR & AR represent a whole new level of intimacy and detail of information that is more similar to medical data protected by HIPAA regulations than it is to data that is consciously provided by the user through a keyboard. It’s been difficult for me to have an in-depth and honest conversation with Google about privacy, or with Facebook/Oculus about privacy, because the technological roadmap for integrating biometric data streams into VR products or advertising business models has still been in the theoretical future.

But news of Google’s Human Sensing group building products for detecting human emotions shows that these types of products are on the technological roadmap for the near future, and that it’s worth having a more in-depth and honest conversation about what types of data will be captured, what won’t be captured, what will be connected to our personal identities, and whether or not we’ll have options to opt out of data collection.

Here’s a list of open questions about privacy for virtual reality hardware and software developers that I first laid out in episode #520:

  • What information is being tracked, recorded, and permanently stored from VR technologies?
  • How will Privacy Policies be updated to account for Biometric Data?
  • Do we need to evolve the business models in order to sustain VR content creation in the long-term?
  • If not then what are the tradeoffs of privacy in using the existing ad-based revenue streams that are based upon a system of privatized surveillance that we’ve consented to over time?
  • Should biometric data be classified as medical information and protected under HIPAA?
  • What is a conceptual framework for what data should be private and what should be public?
  • What type of transparency and controls should users expect from companies?
  • Should companies be getting explicit consent for the types of biometric data that they capture, store, and tie back to our personal identities?
  • If companies are able to diagnose medical conditions from these new biometric indicators, then what is their ethical responsibility for reporting this to users?
  • What is the potential for anonymized physical data to end up being personally identifiable through machine learning?
  • What controls will be made available for users to opt-out of being tracked?
  • What will be the safeguards in place to prevent the use of eye tracking cameras to personally identify people with biometric retina or iris scans?
  • Are any of our voice conversations being recorded in social VR interactions?
  • Can VR companies ensure that there are any private contexts in virtual reality where we are not being tracked and recorded? Or is recording everything the default?
  • What kinds of safeguards can be put in place to limit the tying of our virtual actions to our actual identities in order to preserve our Fourth Amendment rights?
  • How are VR application developers going to be educated about, and held accountable for, the types of sensitive personally identifiable information that could be recorded and stored within their experiences?

The business models of virtual reality and augmented reality have yet to be fully fleshed out, and the new and powerful immersive affordances of these media suggest that new business models may be required that both work well and respect user privacy. Are we willing to continue to mortgage our privacy in exchange for access to free services? Or will new subscription models emerge within the immersive media space where we pay upfront to have access to experiences, similar to Netflix, Amazon Prime, or Spotify? There are a lot more questions than answers right now, but I hope to continue to engage VR companies in a dialogue about these privacy issues throughout 2018 and beyond.



The post AR & AI Storytelling Innovations in ‘TendAR’ from Studio Tender Claws appeared first on Road to VR.

Grabbing Virtual Objects with the HaptX Glove (Formerly AxonVR)

Jake Rubin

The HaptX Glove that was shown at Sundance was one of the most convincing haptics experiences that I’ve had in VR. While it was still primitive, I was able to grab a virtual object in VR, and for the first time have enough haptic feedback to convince my brain that I was actually grabbing something. Their glove uses a combination of exoskeletal force feedback with their patented microfluidic technology, and they’ve significantly reduced the size of their external box driving the experience from the demo that I saw at GDC (back when they were named AxonVR) thanks to a number of technological upgrades and ditching the temperature feedback.

LISTEN TO THE VOICES OF VR PODCAST

Joe Michaels

I had a chance to talk with CEO & co-founder Jake Rubin and Chief Revenue Officer Joe Michaels at Sundance about why enterprise & military training customers are really excited about this technology, some of the potential haptics-inspired interactive storytelling possibilities, how they’re refining the distribution of haptics resolution fidelity that will provide the optimal experience, and their collaboration with SynTouch’s texture-data models in striving toward a haptic display technology that can simulate a wide range of textures.


HaptX was using a Vive Tracker puck for arm orientation, but they had to develop customized magnetic tracking to get the level of precision required to simulate individual finger movements, and one side effect is that their technology could start to be used as an input device. Some of HaptX’s microfluidic technologies, combined with a new air valve that is 1000x more precise, could also enable unique haptics technologies with some really interesting applications for sensory replacement or sensory substitution, or could be used to assist data visualizations in a similar way that sound enhances spatialization through a process called sonification.

Photo by Road to VR

Overall, HaptX is making rapid progress and huge leaps with their haptics technologies, and they’ve crossed a threshold of being useful enough for a number of different enterprise and military training applications. Rubin isn’t convinced that VR haptics will ever be able to fully trick the brain in a way that’s totally indistinguishable from reality, but they’re getting to the point where it’s good enough to start to be used creatively in training and narrative experiences. Perhaps soon we’ll be seeing some of HaptX’s technology in location-based entertainment applications created by storytellers who got to experience their technology at Sundance this year, and I’m really looking forward to seeing how their texture haptic display evolves over the next year.



The post Grabbing Virtual Objects with the HaptX Glove (Formerly AxonVR) appeared first on Road to VR.

360 Film ‘Dinner Party’ is a Symbolic Exploration of Race in America Wrapped in an Alien Abduction Story

Laura Wexler

Dinner Party is an immersive exploration of Betty and Barney Hill’s widely known 1961 alien abduction story that premiered at the Sundance New Frontier film festival. Rather than using normal alien tropes, writers Laura Wexler & Charlotte Stoudt chose to use the spatial affordances of VR to present a symbolic representation of each of their experiences to highlight how vastly different they were.

LISTEN TO THE VOICES OF VR PODCAST

Charlotte Stoudt

Betty and Barney were an interracial couple in New Hampshire, and their purported encounter with aliens was a positive peak experience for Betty, but Barney had an opposite experience that Wexler & Stoudt attribute to his experience as a black man in the early 1960s. Inspired by passages of Barney’s hypnosis recordings posted online, Wexler & Stoudt expanded the Hills’ story into an immersive narrative at the New Frontier Story Lab, and collaborated with director Angel Manuel Soto to bring this story to life in a 360 film.

Dinner Party is the pilot episode of a larger series called The Incident, which explores the aftermath of how people deal with a variety of paranormal or taboo experiences. Wexler & Stoudt are using these stories to explore themes of truth and belief such as: Who is believed in America? Who isn’t? What’s it feel like to go through an extreme experience that no one believes happened to you? And can immersive media allow you to empathize with someone’s extreme subjective experience without being held back by an objective reality that you believe is impossible?

Dinner Party is a great use of immersive storytelling, and it was one of my favorite 360 experiences at Sundance this year. It has a lot of depth and subtext that goes beyond what’s explicitly said, and I thought they were really able to use the affordances of immersive storytelling to explore a phenomenological experience in a symbolic way. It’s a really fascinating exploration of radical empathy using paranormal narrative themes that you might see in The X-Files or The Twilight Zone, and I look forward to seeing what other themes are explored in future episodes.

Here’s a teaser for Dinner Party:



The post 360 Film ‘Dinner Party’ is a Symbolic Exploration of Race in America Wrapped in an Alien Abduction Story appeared first on Road to VR.