Gartner’s Brian Blau on the State of the VR & AR Industries

Brian Blau is the vice president of research for personal technologies at Gartner Research, where he’s in the business of making predictions about consumer adoption of virtual reality and augmented reality technologies. When I last interviewed Blau in 2015, his predictions were a lot more conservative than those of other analysts who were forecasting more explosive growth for VR. Blau tells me that his more conservative estimates have more closely matched reality: he slightly overestimated the PC VR market and underestimated how fast the mobile VR HMD market would take off.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with Blau at Google I/O on May 17th, 2017, where we talk about the state of the VR & AR industries and what some of the potential catalysts for consumer adoption might be. A big point that Blau makes is that technologies get adopted when people are not explicitly thinking about them, and that other ambient computing innovations may end up being some of the bigger drivers of immersive technologies.

Gartner formulated the oft-cited ‘hype cycle’. As of July 2017, the firm puts virtual reality on its way up the ‘Slope of Enlightenment’ with two to five years before reaching the ‘Plateau of Productivity’, while AR is just headed to the bottom of the ‘Trough of Disillusionment’ with five to ten years to the Plateau.

This interview was conducted a few weeks before Apple announced ARKit on June 5th, and before Google ditched the Tango brand and depth-sensor hardware requirement for their phone-based AR on August 29th when they launched ARCore. Then on September 12th, Apple announced a front-facing depth camera on the iPhone X that lets companies like Snapchat create more sophisticated digital avatars, as well as Animojis that provide the ability to embody emojis with recorded voice messages. Apple also announced that it’s now possible to make phone calls via the Apple Watch and AirPods, which is a push towards ambient computing with conversational interfaces and away from relying solely upon phone screens.

Like Duygu Daniels told me in 2016, Snapchat is an AR company, and it’s possible that they have had more of an influence on driving Apple’s technological roadmap than virtual reality has. The consumer use of services like Snapchat and Animoji may prove to be key drivers of immersive technologies, since Apple decided to put a depth sensor camera on the front of the phone rather than on the back. The front-facing camera offers more sophisticated ways to alter your identity through AR filters, which, when seen in the virtual mirror of your phone screen, change how identity is expressed through the embodiment of these virtual avatars. During the Apple keynote, you could see how much Apple’s Craig Federighi changed his expression of himself while recording an Animoji.

Snapchat’s Spectacles glasses received a lot of grassroots marketing from users who were recording Snaps without a phone. But will the additional digital avatar and face-painting features of the iPhone X inspire extra demand from consumers to pay $999 for these types of features that are only made possible by a front-facing depth camera?

It’s clear that the technological roadmap for mobile computing has now started to include volumetric and immersive sensors. Google made a bet with Tango that adoption would be driven by a depth sensor pointed outward into the world for AR, but it looks like Snapchat could be a key app that popularizes front-facing cameras and the use of augmented and mixed reality filters that change how you express yourself and connect to your friends.


Support Voices of VR

Music: Fatality & Summer Trip


Future of Invasive Neural Interfaces & Uploading Consciousness with Ramez Naam

Ramez Naam

Ramez Naam is the author of The Nexus Trilogy of sci-fi novels, which explores the moral and sociological implications of technology that can directly interface with the brain. He gave the keynote at the Experiential Technology Conference in March, exploring the latest research into how these interfaces could change the way that we sleep, learn, eat, find motivation to exercise, create new habits, and broadcast and receive technologically-mediated telepathic messages.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with him after his talk, where we survey existing technologies, discuss where invasive technologies are headed, consider the philosophical and moral implications of directly transferring data into the brain, and ask whether or not it’ll ever be possible to upload our consciousness to a computer.



Support Voices of VR

Music: Fatality & Summer Trip


Using Abstract VR Art for Neural Entrainment & Brain Research

Kevin Mack is an Oscar-winning visual effects artist and abstract artist who creates digital spaces with fluidly moving textures that are awe-inspiring in their ability to create a novel experience unique to VR. In Blortasia you float weightlessly, exploring the ins and outs of a series of tunnels that have a consistent topological sculpture but an ever-changing shader of patterned frequencies of rainbow colors that cultivates a sort of visual neural entrainment. It aspires to recreate a psychedelically transcendent or transpersonal experience that goes beyond what your verbal mind can easily understand, as there’s no content, message, story, game, or objective beyond providing an experience that’s only possible in these virtual worlds. It strikes a unique balance: an exciting and novel visual experience that’s also relaxing, with the power to induce trance states that may have unique healing properties now being discovered in medical applications for distraction therapy.

LISTEN TO THE VOICES OF VR PODCAST

Mack has a neuroscience background, and he’s been collaborating with brain surgeons who are experimenting with using his Zen Parade 360 video as a hypoalgesic to decrease sensitivity to painful stimuli. The video also suppresses the normal thought processes of the left brain so that neuroscientists can map out and discover new properties of our right brains. Preliminary studies are showing that his abstract design approach to distraction therapy applications in VR is actually more effective than other VR apps that were specifically designed for pain management.

Mack describes himself as a psychonaut, having experimented with a lot of psychedelic experiences, but he’s also studied meditation, lucid dreaming, and a number of other esoteric and mystical practices. His career has been in the visual effects industry, where he won an Academy Award for his work on What Dreams May Come, but with virtual reality he’s finally able to synthesize all of his life experiences and interests, allowing people to step inside his immersive VR art experiences that are designed to expand the blueprints of our minds.

He sees that verbal language has allowed humans to evolve our science and technology up to this point, but that it has also limited us and constrained us with a whole host of verbal neuroses. He hopes that his virtual reality experiences like Blortasia and Zen Parade can help free us from the shackles of our left brains, which he sees as inhibiting the deeper parts of our intuition and unconscious levels of awareness. He’s personally had a number of amazing but also traumatizing experiences with psychedelics, and so he’s trying to use virtual reality to replicate those transcendent feelings of awe and wonder that come from mystical experiences in a safer and more controlled fashion.

Mack also shares his out-of-this-world, retrocausal backstory that includes a substance-free psychedelic experience with a time-traveling artificial consciousness that he’s just now starting to create with neural networks embedded within his art. Is it possible that Mack is in the process of developing a sentient level of artificial consciousness that will evolve to master the structures of space-time and bend the arrow of time? Or was it just the vivid imagination of a four-year-old that has provided him with a powerful inspiration for his entire life? Either way, his Blortasia experience has stumbled upon some important design principles stemming from the desire to create art that pushes the boundaries of consciousness.


Support Voices of VR

Music: Fatality & Summer Trip


Community Fallout from UploadVR’s Harassment Settlement, and Bearing Witness to Testimony

The sexual harassment lawsuit against UploadVR was reported by TechCrunch to have been settled on September 6th, and a week later the New York Times followed with more details about how Upload had been barely dented. The case was settled without any elaboration about what did or didn’t happen beyond a vague open letter from the founders of UploadVR. This issue has splintered the VR community into different factions of people who are either actively blacklisting Upload or have written it off as an isolated incident that has resulted in changes and growth.

Executive Editor’s Note: The Voices of VR Podcast is produced independently by host and founder Kent Bye, and syndicated on Road to VR. Because Kent independently produces the podcast and UploadVR is a direct competitor to Road to VR, we at Road to VR felt it best to remove ourselves from decisions involving this episode on ethical grounds. The decision to publish this piece was made by an anonymous board of members of the virtual reality community who collectively voted on the decision.

As the episode was first published directly via the Voices of VR podcast feed and on the Voices of VR website for some time before being syndicated here, we also reached out to both UploadVR and Selena Pinnell, the interviewee in this episode, to offer an opportunity to add a comment to this article. Pinnell declined to offer additional comment. UploadVR founder Taylor Freeman issued the following statement:

In response to this recent Voices of VR episode, I want to express that I really respect Selena for the strength it takes to speak about her experiences and appreciate all the work Kent has done for this industry. These conversations are important. I also appreciate that the podcast makes the distinction between Upload’s past behavior and accounts of assault. Given what came out of this podcast and the conversations happening online, it’s clear the community needs more direct communication from us. I am committed to stepping up, learning from my mistakes and working with the industry to address these issues head on. Therefore, I will personally be hosting an open forum discussion at Upload LA on Wednesday, November 8th at 5pm for those interested in attending. I love this industry and community, and I hope that other young companies and founders can learn from our mistakes.

Sincerely,
Taylor Freeman – CEO, Upload

LISTEN TO THE VOICES OF VR PODCAST

Former employee Danny Bittman wrote about his brief time at Upload in a recent Medium post, and some women spoke out in a Buzzfeed article in July, but beyond that not many people with first- or second-hand knowledge of the lawsuit allegations have made statements on the record. That seems to be changing after the latest round of news about the lawsuit settlement, which has left segments of the VR community very unsettled.

One woman from the VR community who was willing to talk to me about the community fallout from the UploadVR lawsuit was Selena Pinnell, co-founder of the Kaleidoscope VR festival and fund. She is also a producer and featured participant within the Testimony VR project. I previously interviewed the director of the Testimony VR project about their efforts to use VR to create an immersive context for women and men to share testimony about their experiences of sexual assault so that audiences can bear witness to those direct experiences. Skip Rizzo has said that healing from PTSD involves being able to tell a meaningful narrative about your traumatic experiences while remaining emotionally present, and Testimony VR is trying to create a new form of restorative justice by capturing these stories within VR so that viewers can have a one-on-one level of intimacy while they bear witness. Pinnell talks about how powerful it was to have over 150 co-workers and friends witness her testimony about being a rape survivor within the context of a VR experience.

While VR holds potential for the future of distributing new forms of restorative justice, this issue with Upload feels like it’s a long way from achieving a state of justice and a full accounting of the truth of what happened. Members of the Women in VR communities privately do not feel like justice has been served, and Pinnell voices those common concerns as to why she can no longer support Upload, as well as why, in her assessment, the leadership team of Upload never fully accounted for what exactly they did wrong and what they’ve learned.

She also says that it’s hard to trust the leadership after they originally declared that the allegations in the lawsuit were “entirely without merit.” Pinnell talks about how crushing it can be to have your testimony of your direct experience be so explicitly denied in this way, especially when it comes to taboo topics like sexual harassment or sexual assault. (Note that the original allegations against Upload were harassment, gender discrimination, hostile work environment, unequal pay, and retaliation; there weren’t any allegations of sexual assault.) Pinnell emphasizes how important it is to try to listen to women when they are providing testimony about not feeling safe within a work environment, and to try not to go directly towards demanding objective proof from a frame of skeptical disbelief. Learning how to listen, empathize, and reflect the truth of a direct experience is a skillset that is needed here, and it’s something that the unique affordances of virtual reality can help to cultivate through projects like Testimony VR. But there are many more unresolved issues and open questions that Pinnell and I discuss in a deep dive into new models of restorative justice and the community fallout surrounding the Upload lawsuit settlement.


This is a listener-supported podcast; consider making a donation to the Voices of VR Podcast Patreon.

Music: Fatality & Summer Trip


Using AR to Recontextualize Our Relationship to Reality with Cabbibo’s ‘ARQUA!’

Isaac “Cabbibo” Cohen

ARQUA! was one of the ARKit launch applications, designed by VR veteran Isaac “Cabbibo” Cohen, and it has the same indie charm and shader art aesthetic as his previous VR experiences Blarp! (2016) and L U N E (2016). ARQUA’s gameplay involves creating a rainbow aquarium by playing with kelp plants, schools of fish, and 3D rods that you place around your space by turning your body into the controller. Cabbibo is really interested in giving users of his AR experience a sense of agency, creation, and beauty in a way that recontextualizes their relationship to their surrounding environment.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with Cabbibo after a presentation about Art in AR/VR in Portland, OR, where we talked about ARKit, what makes a compelling AR experience, lessons that VR has to teach AR, and how data is the ‘R’ in MR/AR/VR/XR, in that it’s the transformation of real objects into data that allows us to have mediated experiences within a symbolic reality.


Support Voices of VR

Music: Fatality & Summer Trip


Advancing Immersive Computing with Intel’s Virtual Reality Center for Excellence

Intel is investing in the future of immersive computing through their Virtual Reality Center for Excellence. They’re pushing the boundaries of high-end VR gaming experiences, pursuing initiatives to help VR reach critical mass, and exploring how RealSense depth sensor cameras and WiGig wireless technologies fit into the VR ecosystem. I was able to try an early prototype demo of an HTC Vive game rendered on a PC and streamed wirelessly to a mobile headset, part of a research project to search for additional market opportunities for how high-end PCs could drive immersive experiences.

LISTEN TO THE VOICES OF VR PODCAST

I was able to sit down with Kim Pallister, the director of Intel’s VR Center for Excellence, to talk about their various initiatives to advance immersive computing, their WiGig wireless technology, RealSense and Project Alloy, and some of the experiential differences between their lower-end and higher-end CPUs.

SEE ALSO
Promising Intel Proof of Concept Shows SteamVR Game Streaming to Smartphone Over WiFi

Pallister predicts that immersive gaming markets may mirror the differences between the mobile, console, and PC markets, with a spectrum of experiences that trade off price, performance, and power consumption. Intel is initially focusing on pushing the high end of VR gaming experiences, but they believe in the future of immersive computing and are looking at how to support the full spectrum of virtual reality experiences.


Support Voices of VR

Music: Fatality & Summer Trip


Behind the Scenes of Felix & Paul’s Emmy-winning White House VR Documentary

The White House VR documentary People’s House by Felix & Paul Studios won an Emmy for Outstanding Original Interactive Program, and I had a chance to talk with Paul Raphael of Felix & Paul Studios about the challenges of producing such a high-profile piece.

LISTEN TO THE VOICES OF VR PODCAST

Initially they didn’t know how many rooms they’d be able to shoot; President Obama turned out to be such a fan of the project that he literally opened doors for the crew to record more than twice the number of originally scheduled rooms. They were limited to only two 15-minute interviews with Barack and Michelle Obama, and so they collaborated with speechwriters to capture memories and stories for this virtual guided tour.

Image courtesy Felix & Paul Studios

Felix & Paul Studios create their own VR camera hardware, and they’re starting to use their fourth-generation cameras while designing a next-generation digital lightfield camera. Raphael said lightfield VR shoots are essentially visual effects shoots, which require shooting in different wedge segments that need to be composited in post-production.

SEE ALSO
At 40 Minutes, 'Miyubi' is Felix & Paul's Longest VR Film Yet – Available Today

He also said that they’ve been consulting with most of the major HMD manufacturers, including Facebook, on an open standard for immersive 3D audio. Even though they’ve been creating a lot of hardware, they’re more interested in using it to stay on the bleeding edge so that they can continue to innovate and push the creative limits of what’s possible in immersive storytelling.


Support Voices of VR

Music: Fatality & Summer Trip


Is Body Image from Perception or Attitude? – Studying Anorexia with VR Self-avatars

Do patients with anorexia nervosa suffer from body image distortion due to how they perceive their body, or is it due to attitudinal beliefs? Betty Mohler has been using VR technologies to study whether body representation is more perceptual or conceptual.

LISTEN TO THE VOICES OF VR PODCAST

Mohler captured 3D body scans of patients and then used algorithms to alter the body mass index of a virtual self-avatar across a range of ±20%. Patients then estimated their current and desired bodies using a virtual mirror screen, which tracked their movements in real time and showed realistic weight manipulations of photorealistic virtual avatars. Mohler’s results challenge the existing assumption that patients with anorexia nervosa have visual distortions of their body, and suggest that body image distortion may be driven more by attitudinal factors, where patients consider underweight bodies more desirable and attractive.
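To make the kind of manipulation described above concrete, here is a minimal sketch of how a BMI offset could be mapped onto the shape coefficients of a parametric body model, assuming a simple linear relationship between BMI and shape parameters. This is only an illustration, not Mohler’s actual pipeline: the `shape_from_bmi` helper and the `beta_per_bmi` weights are hypothetical placeholders standing in for a statistical body model fit to real scan data.

```python
import numpy as np

# Hypothetical sketch: shift an avatar's BMI by adjusting the shape
# coefficients of a parametric body model (e.g. an SMPL-like model).
# `beta_per_bmi` is a made-up placeholder for per-coefficient weights
# that a real study would fit from scanned body data.
N_SHAPE_PARAMS = 10
beta_per_bmi = np.linspace(0.8, 0.0, N_SHAPE_PARAMS)  # placeholder weights

def shape_from_bmi(base_shape: np.ndarray, base_bmi: float, target_bmi: float) -> np.ndarray:
    """Return shape coefficients for an avatar whose BMI is shifted from base_bmi to target_bmi."""
    return base_shape + (target_bmi - base_bmi) * beta_per_bmi

# Sweep the avatar's BMI from -20% to +20% of the scanned value,
# mirroring the experimental range described above.
base_shape = np.zeros(N_SHAPE_PARAMS)  # the patient's fitted shape coefficients
base_bmi = 21.0                        # example scanned BMI
for pct in np.linspace(-0.20, 0.20, 9):
    shifted = shape_from_bmi(base_shape, base_bmi, base_bmi * (1.0 + pct))
    print(f"{pct:+.0%} BMI -> first shape coefficient {shifted[0]:+.3f}")
```

In a real setup, the shifted coefficients would drive the mesh of the photorealistic avatar shown in the virtual mirror, so the patient sees their own tracked movements on a body that is heavier or lighter than their scanned one.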

Mohler works at the Space & Body Perception Group at the Max Planck Institute for Biological Cybernetics. She collaborates with philosopher of neuroscience Dr. Hong Yu Wong to research foundational questions about self-perception like: Who am I? Where am I? Where is the origin of my self? Where is the frame of reference? What is the essence of me? How do we know that there’s an external world? What does it mean to have a shared self where multiple people share the same body experience? What does it mean to have a body? How big is my body? Is it possible to be at multiple locations at once while in VR?

I interviewed Mohler for the third time at the IEEE VR conference in Los Angeles this past March, exploring all of these provocative questions (see my previous interviews on the uncanny valley and avatar stylization).


Support Voices of VR

Music: Fatality & Summer Trip


Tribeca Storyscapes Award Winner on Using VR to Connect with the Beauty of Nature

Marshmallow Laser Feast is a collective of artists who are interested in using VR technologies to capture the aesthetic beauty of nature and provide immersive experiences that inspire people to cultivate an even deeper relationship with the reality that surrounds them.

LISTEN TO THE VOICES OF VR PODCAST

Marshmallow Laser Feast’s Treehugger experience provides an immersive telling of the lifecycle of water in trees, as rain makes its way up from the roots of a sequoia tree and is released as oxygen, all rendered in a highly stylized and beautiful point-cloud aesthetic. The experience included smells and passive haptic feedback to make the simulated volumetric time-lapse even more immersive, and it won the Storyscapes award at the Tribeca Film Festival. I caught up with co-founder Barnaby Steel to talk about how VR could be used to inspire us to cultivate an even deeper relationship with the world around us.



Support Voices of VR

Music: Fatality & Summer Trip


Blending Immersive Theater & VR with ‘Draw Me Close’ by NFB and National Theatre

The National Theatre has created an Immersive Storytelling Studio to better understand the practices, protocols, and opportunities of how virtual and augmented reality technologies are creating new storytelling possibilities. They collaborated with the National Film Board of Canada on an immersive theater piece called Draw Me Close that premiered at the Tribeca Film Festival. It features a one-on-one interaction with a live actor in a mixed reality environment while the audience member is immersed within a virtual reality headset, playing the archetypal role of a son or daughter as your mother embraces you, draws with you, and tucks you into bed while she narrates a memoir of her life. I talked with Immersive Storytelling Studio producer Johanna Nicolls about the reactions, intention, and overall development of Draw Me Close, which is their first immersive theater VR piece.

LISTEN TO THE VOICES OF VR PODCAST

The spatial storytelling techniques and skills that theater has been developing for hundreds of years translate really well to the even more immersive, 360-degree VR environments. But with live experiences like Sleep No More and Then She Fell, there’s also a whole other ‘immersive theater’ movement within the theater world that is bringing new levels of embodiment, choice, and agency into authored theater performances.

No Proscenium podcast host Noah Nelson wrote up a great introductory primer on immersive theater that explores the nuanced differences between immersive theater, site-specific performances, and environmentally-staged theater. One differentiation that Nelson makes is that immersive theater has much more of an explicit experiential design that “feels more like an event you experienced than a performance that you witnessed.”

The version of Draw Me Close that I saw at Tribeca took a powerful first step in exploring how live actors sharing the same physical space within a mixed reality context provide a new dimension of emotional and embodied presence. The haptic feedback of an embodied hug from a co-present human is something that may never be fully simulated in VR, and so this illustrates to me a clear threshold of what can and cannot currently be done in VR.

I also saw the Then She Fell immersive theater piece, which featured a lot of one-on-one interactions with performers, and so I think there’s a profound depth of emotional presence and intimacy that you can achieve with another person without the barriers of technology. You still can’t see the more subtle microexpressions of emotion or perceive the more nuanced body language cues when interacting with other humans while you’re in VR, but feeling the actor touch me provided a deeper phenomenological sense that I was interacting with an actual human in real time. Directly interacting with another physically co-located person and feeling their touch closed some perceptual gaps and took my sense of social presence beyond the normal levels I have in distributed social VR experiences.

This was also such a new type of experience that I didn’t know the rules of engagement for how much I was expected to speak or interact. There weren’t a lot of prompts, invitations, or space made available for dialogue, and so I mostly silently received the story as each moment’s actions were being actively discussed, analyzed, and contextualized by a steady stream of real-time narration. There were, however, a number of interactive actions I was invited to do, ranging from opening a window to drawing Tilt Brush-style on the floor. There was a deliberate decision to be fairly vague in casting the magic circle of rules and boundaries of what to expect, since the story, characters, and loving embrace of a motherly hug were all designed to be a surprise. This shows the challenging issue of balancing how to receive explicit consent to being touched while also maintaining the integrity of the mystery of a story that’s about to unfold.

Draw Me Close is an ambitious experiment to push the storytelling possibilities that are made available within a one-on-one interaction of an immersive theater piece while the audience is within virtual reality. It was profound enough that a number of people needed some level of decompression and help transitioning back after exploring some of the deeper issues that were brought up within the experience.

There are obvious limitations to how this type of experience could be scaled up so that it’s logistically feasible to show to a wider audience, but it’s refreshing to see the NFB and National Theatre’s Immersive Storytelling Studio experiment, explore, and push the limits of what’s even possible. If too much effort is focused on what’s sustainable or financially viable, then it could hold back deeper discoveries about the unique affordances of combining immersive theater with immersive technologies.



Support Voices of VR

Music: Fatality & Summer Trip
