Danfung Dennis of Condition One has an ambitious vision for the potential of virtual reality, and it’s one of the most radical I’ve come across. He believes that VR can be used as a tool to cultivate compassion through the embodied experience of witnessing suffering within VR. He says that the process of witnessing suffering can be used as a type of advanced Buddhist mind training to focus your attention, contemplate your visceral reactions, and grow compassion through taking action. These brief VR experiences have the potential to impact the day-to-day consumer decisions that people make, which, taken collectively, could radically change the world.
LISTEN TO THE VOICES OF VR PODCAST
I know that this is possible because I had one of the most powerful reactions I’ve ever had from watching Condition One’s Fierce Compassion / Operation Aspen VR experience. This live-action, cinéma vérité VR experience shows animal rights activists breaking into a factory farm to perform an open rescue and document the horrendous living conditions of chickens in cages. It’s a guided tour of the many untreated health ailments and barbaric conditions that are common in these types of industrial-scale factory farms. Having a direct embodied experience and bearing witness to this suffering had such a powerful impact on me that I vowed to never purchase anything other than cage-free eggs.
Condition One has also been producing guided meditations that are designed to be watched after experiencing some of their other animal rights experiences. Factory Farm is the most graphic and intense experience I’ve ever had in VR in that it shows the slaughter of two pigs as they move through a factory farm in Mexico. After witnessing this horrific scene in VR, I can see why Paul McCartney once said, “If slaughterhouses had glass walls, everyone would be vegetarian.”
Condition One has also been tackling larger issues like global warming in VR. They produced the Melting Ice companion VR piece to An Inconvenient Sequel, the follow-up film to Al Gore’s An Inconvenient Truth. The film lays out all of the latest science as told through Al Gore’s personal narrative, while the VR experience doesn’t attempt to delve into the science in that much depth. Dennis pulled back a lot of the narrative and story elements and focused on creating an embodied experience that transports you to locations of melting ice: large chunks calving off the sides of cliffs, the cracking of ice and the sound of steady dripping, and entire rivers of glacial meltwater cutting through sheets of ice.
One of the challenges with complex topics like global warming is that it’s very difficult to provide a singular embodied experience in VR that tells the entire story of its systemic causes. Standing on ice that’s disappearing at an accelerated pace due to global warming is as good an experience as any, but it’s still difficult to tell that entire story within the confines of VR. So rather than convey all of the science, Dennis decided to take a more contemplative and Zen approach, creating a sparse experience with limited narration in order to cultivate a direct experience of the sounds and visuals of a rapidly changing part of the planet.
Dennis believes that VR has the potential to be a tool that can inspire humans to cultivate compassion by taking actions that relieve suffering. He’s interested in creating VR experiences that allow us to witness the suffering in the world, and that ultimately help us expand our sphere of compassion beyond just our immediate friends, family, and pets to eventually include all sentient beings and the planet earth. These embodied virtual reality experiences stick with us in a deeper way, and become a part of our memories as we decide whether to continue participating in a system of violence or to choose more sustainable and ethical options that cultivate compassion and take into consideration the impact on the next seven generations.
Tilt Brush is Google’s first VR app to launch on the Oculus Rift, and I had a chance to catch up with Tilt Brush product manager Elisabeth Morant. We have a broad discussion about adapting Tilt Brush for the Touch controllers, the Tilt Brush Artist in Residency Program, the Tilt Brush Unity Toolkit, and some of the features potentially coming in the future, including a layering system and more unexpected, non-intuitive features similar to the audio-reactive brushes. I also asked about privacy in VR, but Google has yet to disclose any information about what data they may or may not be capturing from VR users.
LISTEN TO THE VOICES OF VR PODCAST
Some of the most newsworthy parts of my interview with Morant concerned things that weren’t talked about. When asked to comment on this being the first VR collaboration between Facebook & Google, Morant said that Google is “really looking to push virtual reality as a platform.” There’s been a tense history between Google and Facebook, and releasing Tilt Brush via Oculus Home is the first collaboration in the VR space that we’ve seen from the two tech giants.
This also means that it’s the first Google app being released within the context of Oculus’ Privacy policy, which states that physical movements can be recorded and tied back to your Facebook profile. Facebook will be able to capture and store the physical movements of users who are using Google’s application, and this data could then be connected to a unified Facebook super profile that pulls in data from third parties. Up to this point, Google hasn’t made any VR-specific updates to their Privacy policy that explicitly account for what may or may not be recorded in VR and then connected back to your Google profile.
I asked Morant about this overlapping privacy policy dynamic between Google and Facebook during my interview, and Google’s PR liaison said that we could follow up after the interview for more information. I did follow up, and Google is indeed looking at the possibility of updating their privacy policy, saying “it is something that we are looking at, but nothing to share at this time.”
But Google again dodged answering what they may or may not already be recording in VR. I had asked a follow-up question about what data they’re capturing after my previous interview about Google Earth VR, but I received a generic boilerplate answer. When I asked again, they basically sent back the same non-answer.
Non-answers are hard to write about and cover, and so they usually serve their purpose of keeping the topic out of the conversation. But it also reinforces the impression that privacy in VR is the big elephant in the room that no one really wants to talk about. So I maintained the integrity of my original questions within the context of the podcast interview, and I’ve also included the full context of my follow-up exchange with Google PR below.
Me:
I just had a follow-up question about privacy with some reference material. I’d love to get some more specific answers from a privacy expert on your side, and include that more detailed information at the end within my wrap-up. If there’s someone there who I could speak to directly, then that would be preferable. A written response also works, but not quite as well within the podcast medium because I end up having to speak words on your behalf.
At this moment, Google’s Privacy policy does not have any language that is specific for any virtual reality technologies, and there are no controls for VR data that might be recorded listed within the “My Account” Privacy dashboard.
My question: Is any physical movement data of either the head or hands from any VR experiences being recorded and saved by Google?
Oculus’ Privacy Policy states that “Information about your physical movements and dimensions when you use a virtual reality headset” are being captured and stored as part of the “Information Automatically Collected About You When You Use Our Services.”
In my previous interview about Google Earth VR, I followed up with some questions about privacy and you sent back a prepared statement that I included within both my written and spoken write-up. Here’s that passage:
Google Earth VR is a free application for the Vive on Steam VR, and so I had a couple of follow up questions for Google after my interview. I asked them: “What kind of data can and cannot be collected given Google’s standard Privacy Policy within a VR experience?” and “Are there long-term plans to evolve Google’s Privacy Policy given how VR represents the ability to passively capture more and more intimate biometric data & behavioral data?”
Google:
Our users trust us with their information and we outline how it may be used across Google — to personalize experiences, to improve products, and more — in our Privacy policy. Users can control the information they share with Google in ‘My Account’.”
Me:
Google’s previous response didn’t actually really directly answer my question. Google’s Privacy policy does not have any language that is specific for any virtual reality technologies, and there are no controls for VR data that might be recorded listed within the My Account Privacy dashboard.
• Does this mean that no virtual reality specific data is being recorded or captured from Google?
• Or if there is data being collected from VR, will we see an update to Google’s Privacy Policy that discloses what is being recorded?
Thanks for being willing to take a look at this, and I look forward to getting some more specific answers than Elisabeth was able to provide.
Google:
We don’t have a privacy expert available for you to speak to for the podcast. In regards to your question about an updated privacy policy – it is something that we are looking at, but nothing to share at this time. As soon as we have any updates, we’ll let you know. The statement we provided before still applies:
“Our users trust us with their information and we outline how it may be used across Google — to personalize experiences, to improve products, and more — in our Privacy policy. Users can control the information they share with Google in ‘My Account’.”
Google is looking to potentially update their privacy policy with more information about what is or isn’t recorded, but up to this point they haven’t disclosed any information about what they’re capturing. There have been no updates to the Privacy policy to account for any new VR technologies, and there’s no VR data available through the ‘My Account’ tab on your Google account.
I’ve asked Google twice now what data they’re recording, and both times they’ve avoided giving a direct answer. Privacy in VR is a hard topic to cover, especially when the major players don’t really want to talk about it. I wrote extensively in this article about the privacy implications of VR and how VR has the potential to become one of the most powerful surveillance technologies or the last bastion of privacy, depending on the types of user demands that are placed upon the systems that are built. Sarah Downey argues against companies capturing too much data and storing it forever, and so it’s important for companies to be transparent about what they’re doing.
Google appears to be failing on the privacy transparency front by avoiding answering simple questions. What data are you recording in VR? Is it being tied back to personally identifiable information? And if so, when can we expect updates to the privacy policy to reflect that? These seemingly simple questions will one day be very important if VR takes off like the industry hopes and expects.
Alvin Wang Graylin is the China President of Vive at HTC, and I had a chance to talk with him at CES this year about what’s happening in China. He provided me with a lot of cultural context, including support from the highest levels of the Chinese government to invest in companies working on emerging technologies like virtual reality and artificial intelligence. There was a flood of Chinese companies at CES showing VR headsets, peripherals, and 360 cameras. On average, the VR hardware from China tends to be nowhere near the quality of the major VR players like the HTC Vive, Oculus Rift, Sony PSVR, or Samsung Gear VR, but there were some standout Chinese companies who are leading innovation in specific areas. For example, some highlights from CES include TPCast’s wireless VR, Noitom’s hand-tracked gloves, and Insta360, which offers some of the cheapest 360 cameras with the best specs available right now.
After CES, I was convinced that if you want to understand what’s going to be happening in the overall VR ecosystem, then it’s worth looking at what’s happening in China. The VR market in China is growing, and there is a lot more optimism about technological adoption and enthusiasm for VR arcade experiences. Education is also very important in China with the one-child/two-child policy, and Graylin says that if VR can be proven to have a significant educational impact, then the government will act to get VR headsets into every classroom. Once VR is in the classrooms, it’ll help convince more parents to buy a headset for the home if they believe it’ll help their children’s education.
LISTEN TO THE VOICES OF VR PODCAST
In an extensive round-up of Chinese-driven VR growth from Yoni Dayan, he mentions a moonshot project called Donghu VR Town, which is a proposed “city built in the south of the country, designed with virtual reality intertwined in every aspects from services, healthcare, education, to entertainment.” Here’s an untranslated promotional video that shows off what a VR-utopian city might look like:
China is a complicated topic and ecosystem, but after having a direct experience of the TPCast wireless VR, Noitom VR gloves, and the great-looking, high-res stereoscopy from an Insta360 camera at CES, I think that it’s time to really look to China as a leader in innovation. If China really does go all-in on VR and AI and continues to invest large sums of money, then that type of institutional support is going to leapfrog China into being one of the leading innovators in the world. I’ve already started to see this at CES this year and at the International Joint Conference on Artificial Intelligence, where there was a very healthy representation from China. The things to watch over the next couple of years are any big educational infrastructure investments by the Chinese government as well as the evolving digital out-of-home entertainment hardware ecosystem.
At Unity’s Unite keynote in November, Otoy’s Jules Urbach announced that their Octane Renderer was going to be built into Unity to bake light field scenes. But this is also setting up the potential for real-time ray tracing of light fields using application-specific integrated circuits from PowerVR, which Urbach says could render up to 6 billion rays per second at 120W. Combining this PowerVR ASIC with foveated rendering and Otoy’s Octane renderer built into Unity provides a technological roadmap for producing a photorealistic quality that will be like beaming the Matrix into your eyes.
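As a rough, back-of-envelope illustration of what that throughput could mean (assuming a Rift/Vive-class display of about 2160×1200 combined pixels refreshing at 90 Hz, figures that come from current headset specs rather than from the interview itself):

$$\frac{6 \times 10^{9}\ \text{rays/s}}{90\ \text{frames/s} \times (2160 \times 1200)\ \text{pixels}} \approx 26\ \text{rays per pixel per frame}$$

Foveated rendering would then concentrate most of those samples in the small region the eye is actually looking at, which is why the combination of the ASIC, foveation, and Octane is being framed as a path toward photorealistic, ray-traced VR at full framerate.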
LISTEN TO THE VOICES OF VR PODCAST
I had a chance to catch up with Urbach at CES 2017, where we talked about the Unity integration, the open standards work Otoy is pursuing, overcoming the uncanny valley, the future of the decentralized metaverse, and some of the deeper philosophical thoughts about the Metaverse that are the driving motivation behind Otoy’s work toward being able to render virtual reality with a visual fidelity that is indistinguishable from reality.
Here’s Otoy’s Unity Integration Announcement:
Here’s the opening title sequence from Westworld that uses Otoy’s Octane Renderer:
Within premiered their first real-time rendered, interactive experience at Sundance New Frontier this year with Life of Us, which tells the story of life on the planet through embodying a series of characters who are evolving into humans. The experience sits somewhere between a film and a game, and ends up feeling much like a theme park ride. There’s an on-rails narrative story being told, but there are also opportunities to throw objects, swim or fly around, control a fire-breathing dragon, and interact with another person who has joined you in the experience. You learn which new character you’re embodying by watching the other person embody that creature with you, and the modulation of your voice also changes with each new character, deepening your sense of embodiment and presence.
LISTEN TO THE VOICES OF VR PODCAST
I had a chance to catch up with Within CTO and co-founder Aaron Koblin at Sundance to talk about their design process, overcoming the uncanny valley of voice modulation delays, how the environment is a primary feature of VR experiences, and how their background in large-scale museum installations inspires their work in virtual reality.
Koblin also talks quite a bit about finding the balance between the storytelling of a film and the interaction of a game, and how Life of Us is their first serious investigation into that hybrid form that VR provides. He compares this type of VR storytelling to the experience of going to a baseball game with a friend, in that this type of sports experience is amplified by the shared stories that you and your friends tell. This is similar to the collaborative storytelling of group explorations in VRChat, but with an environment that is a lot more opinionated in how it tells a story.
Life of Us is a compelling way to connect and get to know someone. The structure of the story is open enough to allow each individual to explore and express themselves, but it also gives a more satisfying narrative arc than a completely open world that can have a fractured story. Life of Us has a deeper message about our relationship to each other and the environment that it’s asking us to contemplate. Overall, Koblin says that our relationships with each other essentially amount to the sum total of our shared experiences, and so Within sees an opportunity to create the types of social & narrative-driven, embodied stories that we can go through to connect and express our humanity to each other.
Here’s a trailer for Life of Us:
The Life of Us experience should be released sometime in 2017, and you can find more information on the Within website (which links to all of their platform-specific apps), or on their newly launched WebVR portal at VR.With.in.
I caught up with Human Interact founder and creative director Alexander Mejia six months ago to talk about the early stages of creating an interactive narrative using a cloud-based, machine learning-powered natural language processing engine. We talk about the mechanics of using conversational interfaces as a gameplay element, accounting for gender, racial, and regional dialects, the funneling structure of accumulating a series of smaller decisions into a larger fork in the story, the dynamics between multiple morally ambiguous characters, and the role of a character artist who sets the bounds of the AI and its personality, core belief system, and complex set of motivations.
Here’s a Trailer for Starship Commander:
Here’s Human Interact’s Developer Story as Told by Microsoft Research:
Last year, Baobab Studios’ Eric Darnell was skeptical about adding interactivity to virtual reality stories because he felt like there was a tradeoff between empathy and interactivity. But after watching people experience their first VR short Invasion!, he saw that people were much more engaged with the story and wanted to get more involved. He came to the realization that it is possible to combine empathy and interactivity in the form of compassionate acts, and so he started to construct Baobab’s next VR experience Asteroids! around the idea of allowing the user to participate in an act of compassion. I had a chance to catch up with Darnell at Sundance where we talked about his latest thoughts on storytelling in VR, and explored his insights from their first explorations of what he calls “emotional branching.”
LISTEN TO THE VOICES OF VR PODCAST
Darnell says that one of the key ingredients of a story is “character being revealed by the choices that they make under pressure.” Rather than make you the central protagonist as a video game might, in Asteroids! you’re more of a sidekick who can choose whether or not to help out the main characters. This allows an authored story to be told through the main characters that is ultimately independent of your actions, but your “local agency” choices still flavor your experience in the sense that there are different “emotional branches” of the story for how the main protagonists react to you based upon your decisions.
Unpacking the nuances of these emotional branches showed me that Asteroids! was doing some of the most interesting explorations of interactive narrative at Sundance this year, and I would’ve completely missed them had I not had this conversation with him. We explore some of the more subtle nuances of the story, so I’d recommend holding off on this interview if you don’t want too many spoilers (the experience should be released sometime in the first half of 2017). But Darnell is a master storyteller, and he’s got a lot of really fascinating thoughts about how stories might work in VR that are worth sharing with the storytellers in the wider VR community.
They’re also doing some interesting experiments with adding body language mirroring behaviors to the other sidekick characters, based upon social science research, in order to create subtle cues of connection to the characters and story. There is another dog-like robot in the experience that is in the same sidekick class as you, and you can play fetch with it and interact with it in subtle ways.
Storytelling is a time-based art form that has a physical impact on our bodies, releasing chemicals including cortisol at moments of dramatic tension, oxytocin with character interactions, and dopamine at the resolution of that dramatic tension. Given these chemical reactions, Darnell believes that the classic three-act structure of a story is something that is encoded within our DNA. Storytelling is something that has helped humans evolve, and it’s part of what makes us human. He cites Kenneth Burke saying that “stories are equipment for living.” Stories help us learn about the world by watching other people making choices under pressure.
There’s still a long way to go before we achieve the Holy Grail of completely plausible interactive stories that provide full global agency while preserving the integrity of a good dramatic arc. It’s likely that artificial intelligence will eventually have a much larger role in accomplishing this, but Asteroids! is making some small and important steps with Darnell’s sidekick insights and “emotional branching” concept. It was one of the more significant interactive narrative experiments at Sundance this year, and it showed that it’s possible to combine empathy and interactivity to make a compassionate story.
There were a number of immersive storytelling innovations at Sundance 2017 across experiences including Dear Angelica, Zero Day VR, Miyubi, and Life of Us, but Mindshow VR’s collaborative storytelling platform was the most significant long-term contribution to the future of storytelling in VR. I first saw Mindshow at its public launch at VRLA, and it’s still a really compelling experience to record myself playing multiple characters within a virtual space. It starts to leverage some of virtual reality’s unique affordances when it comes to adding a more spatial and embodied dimension to collaboratively telling stories.
I had a chance to catch up with Visionary VR CEO Gil Baron and Chief Creative Officer Jonnie Ross, where we talk about how Mindshow is unlocking collaborative creative expression that allows you to explore a shared imagination space within their platform. We talk about character embodiment, the magic of watching recordings of yourself within VR, and how they’re working towards enabling more multiplayer and real-time improv interactions. They also announced at Sundance that they’re launching Mindshow as a closed alpha.
LISTEN TO THE VOICES OF VR PODCAST
This is also episode #500 of the Voices of VR podcast, and Jonnie and Gil turn the tables on me by asking what I think the ultimate potential of VR is. My full answer to this question, which I’ve asked over 500 people, will be covered in my forthcoming book The Ultimate Potential of VR. But briefly, I think that VR has the power to connect us more to ourselves, to other people, and to the larger cosmos. Mindshow VR is starting to live into that potential today by providing a way of expressing your inner life through the embodiment of virtual characters that you can then witness, reflect upon, and share with others, and Google Earth VR shows the power of using VR to connect more to the earth as well as the wider cosmos.
If you’d like to help celebrate The Voices of VR podcast’s 500th episode, then I’d invite you to leave a review on iTunes to help spread the word, and become a donor to my Voices of VR Patreon to help support this type of independent journalism. Thanks for listening!
Gabo Arora founded the United Nations’ VR program, and has directed some of the more well-known VR empathy experiences, starting with Clouds Over Sidra in December 2014 in collaboration with Chris Milk’s VR production house Within. Milk first showed Clouds Over Sidra during Sundance 2015, and featured it prominently in his VR as the Ultimate Empathy Machine TED talk in March 2015, which popularized VR’s unique abilities for cultivating empathy.
I had a chance to catch up with Arora at Oculus’ VR for Good premiere party at Sundance where we talked about producing Clouds Over Sidra, his new Lightshed production company, and the importance of storytelling in creating VR empathy experiences.
LISTEN TO THE VOICES OF VR PODCAST
Arora’s work has been at the intersection of storytelling and technology, and of diplomacy and humanitarian efforts. He studied film in college, but was unable to launch a successful film career in Hollywood, and instead turned towards humanitarian work with NGOs after 9/11 and eventually with the United Nations in 2009. He used his creative sensibilities to move beyond written text reports and look to the power of new media to tell humanitarian stories. He had some success collaborating with social media sensation Humans of New York photographer Brandon Stanton, coordinating a 50-day global trip with him in 2014 in order to raise awareness of the Millennium Development Goals. He proved the power of using emerging technology to promote humanitarian goals.
After he was introduced to Within’s Chris Milk in 2014, he gathered enough support to create a virtual reality lab at the UN, starting with an experience about the Syrian refugee crisis. Clouds Over Sidra was shot in two days in December 2014 at the Za’atari Refugee Camp, which housed over 80,000 Syrian refugees. Arora wanted to focus on a day in the life of a 12-year-old refugee, and collaborated with his UN contacts to find the young female protagonist named Sidra. Arora said that a big key to cultivating empathy in virtual reality is to focus on the common, ordinary aspects of day-to-day living, whether that’s eating a meal or preparing for school. While some of these scenes would seem like non-sequiturs in a 2D film, the sense of presence that’s cultivated in VR gives the feeling of being transported into their world and of being more connected to the place and story.
Arora acknowledges that merely showing the suffering of others can have the opposite effect of cultivating empathy. He cites Susan Sontag’s Regarding the Pain of Others as a book that helped provide some guidelines for how to represent the pain of others. He’s aware that we can have a lustful relationship towards violence, and that there are risks that normalizing suffering can create an overwhelming sensory overload. He addresses some of Paul Bloom’s arguments in Against Empathy in that there’s a bias towards empathizing with people who look or act like you. If there’s too much of a difference, then it can be difficult to connect on any common ground. This is a big reason why Arora has typically focused on finding ways of representing moments of common humanity within the larger context of fleeing from war or coping with a spreading disease like Ebola.
Arora was able to show that Clouds Over Sidra helped the United Nations beat its projected fundraising goal of $2.3 billion by raising over $3.8 billion, but he’s much more confident in citing UNICEF’s numbers, which show face-to-face donation rates doubling from 1 in 12 without VR to 1 in 6 with VR, along with a 10% increase per donation. With these types of numbers, there’s been a bit of a gold rush for NGOs to start making VR experiences for a wide range of causes, but Arora cautions that not all have been successful because not all of them have had the emphasis on good storytelling or the technical expertise that he’s enjoyed in his collaborations with Within.
Hamlet on the Holodeck author Janet H. Murray recently echoed the importance of good storytelling in VR experiences by saying that “empathy in great literature or journalism comes from well-chosen and highly specific stories, insightful interpretation, and strong compositional skills within a mature medium of communication. A VR headset is not a mature medium — it is only a platform, and an unstable and uncomfortable one at that.” The storytelling conventions of VR are still emerging, and the early VR empathy pieces have been largely relying upon conventions of traditional filmmaking.
Arora admits that there’s a certain formulaic structure that most of these early VR empathy pieces have taken that relies upon voice over narration, but he says that he started to dial back the voice overs in his most recent piece The Ground Beneath Her. He says that his recent collaboration with Milk on the U2 Song for Someone music video showed him that there’s a lot that can be communicated without resorting to voice overs.
Murray argues that “VR is not a film to be watched but a virtual space to be visited and navigated through,” and she actually recommends “no voice-overs, no text overlays, no background music.” I’ve independently come to the same conclusion, and generally agree with this sentiment because most voice over narrations or translations feel scripted and stilted. They are also often recorded within a studio that doesn’t match the direct and reflected sounds of the physical locations that are shown, which creates a fidelity mismatch that can break presence and prevent me from feeling completely immersed within the soundscapes of another place.
I’ve found that the cinéma vérité approach of having authentic dialog spoken directly within a scene works really well, or that it works best if the audio is directing me to pay attention to specific aspects of the physical locations that are being shown. After watching all ten of the Oculus VR for Good pieces at Sundance, one of the most common issues that I saw was the physical location not matching whatever is being talked about. Sometimes they’re interesting locations to look at, but it ends up putting the majority of the storytelling responsibility on the audio. If the audio were taken away, then the visual storytelling wouldn’t be strong enough to stand on its own.
6×9’s Francesca Panetta used audio tour guides as an inspiration for how to use audio in order to cultivate a deeper sense of presence within the physical location being shown. One live-action VR piece that does this really well is the cinéma vérité piece by Condition One called Fierce Compassion, which features an animal rights activist speaking on camera, taking you on a guided tour through an open rescue as it’s happening. The live delivery of narration feels much more dynamic when it’s spoken within the moment, and much more satisfying than a scripted narration that’s written and recorded after the fact.
A challenging limitation of many NGO empathy pieces is that they often feature non-English speakers who need to be translated later by a translator who doesn’t always match the emotional authenticity and dynamic speaking style of the original speaker. Emotional authenticity and capturing a live performance are some of the key elements of what I’ve found makes a live-action VR experience so captivating, but it’s been rare to find that in VR productions so far. There are often big constraints of limited time and budgets, which means that most of them end up featuring voice over narration added after the fact, since this is the easiest way of telling a more sophisticated story. This formula has proven to be successful for Arora’s empathy pieces so far, but it still feels like a hybrid between traditional filmmaking techniques and what virtual reality experiences will eventually move towards, which I think Murray quite presciently lays out in her piece about emerging immersive storyforms.
Arora’s work with the UN in collaboration with Within has inspired everyone from the New York Times VR to Oculus’s VR for Good program and HTC’s VR for Impact. It also inspired Chris Milk’s TED talk about VR as the “ultimate empathy machine”, which is a meme that has been cited on the Voices of VR podcast dozens of times.
But the film medium is also a powerful empathy machine; Arora cites Moonlight, released in 2016, as a particularly powerful empathy piece. Roger Ebert actually cited movies as the “most powerful empathy machine” during his Walk of Fame speech in 2005. He said:
We are born into a box of space and time. We are who and when and what we are and we’re going to be that person until we die. But if we remain only that person, we will never grow and we will never change and things will never get better.
Movies are the most powerful empathy machine in all the arts. When I go to a great movie I can live somebody else’s life for a while. I can walk in somebody else’s shoes. I can see what it feels like to be a member of a different gender, a different race, a different economic class, to live in a different time, to have a different belief.
This is a liberalizing influence on me. It gives me a broader mind. It helps me to join my family of men and women on this planet. It helps me to identify with them, so I’m not just stuck being myself, day after day.
The great movies enlarge us, they civilize us, they make us more decent people.
Ebert’s words about film as a powerful empathy machine are just as true today as when he said them in 2005. I do believe that virtual reality has the power to create an even deeper sense of embodied presence that can trigger mirror neurons, and it may eventually prove to become the “ultimate empathy machine.” VR may also eventually allow us to virtually walk in someone else’s shoes to the point where our brains may not be able to tell the difference between what’s reality and what’s a simulation. But as Murray warns, “empathy is not something that automatically happens when a user puts on a headset.” It’s something that is accomplished through evolving narrative techniques that take full advantage of the unique affordances of VR, and at the end of the day it will come down to good storytelling, just like in any other medium.
Owlchemy Labs recently announced that Job Simulator has grossed over $3 million, making it one of the most successful indie VR titles to date, and so it’s worth reflecting on some of the design principles of agency and plausibility that have proven to be some of the key affordances of the virtual reality medium. I had a chance to talk to Owlchemy Labs’ Cy Wise at PAX West, where she shared some of the guiding principles behind Job Simulator as well as some of the more existential reactions from users questioning the nature of reality.
LISTEN TO THE VOICES OF VR PODCAST
Wise says that one of the key design principles of Job Simulator was to make sure that everything was interactive. Their goal was to not make it feel like a game, but rather that people would get so lost in the plausible interactions that they’d be able to achieve a deep sense of presence. She cites the example of making tea: they had to account for the dozens of different ways that people make their tea in order to maintain the level of plausibility that they’ve created in their virtual world. If an interaction isn’t intuitive, then the rules and limitations of the simulation make it feel like a game, rather than like simply executing a task in an environment whose affordances match your expectations of how it should behave.
Designing for agency and plausibility has been a key theme in my previous interviews with Owlchemy Labs’ Alex Schwartz from GDC 2015, Vision Summit 2016, and PAX West 2016.
Owlchemy Labs was able to do such a good job of creating a sense of presence that Wise said it would often create a bit of an existential crisis, since it blurred people’s boundaries of reality. VR developers talk about this as the sense of presence in VR, but there isn’t a common language for people who are having a direct experience of VR presence for the first time.
Wise asks, “How do you talk about the ‘not real’ real? Or how do you talk about the imaginary real life?” If people are able to have a direct lived experience within a virtual simulation, and it feels completely real, then it raises the question of whether or not we’re already living in a simulation. The Atlantic did a profile on people who experienced a post-VR existential crisis that made them question whether actual reality is real or not.
Hassan Karaouni recently told me that if we’re not already in a simulation, then we’re most certainly going to create virtual realities that are indistinguishable from reality, complete with intelligent agents within these simulations who will be asking these exact same questions.
Wise has been on the front lines of having these types of interactions with users of Owlchemy Labs’ experiences, and it’s only natural that these types of VR experiences start to make people question the balance between fate and free will in their lives, as VR enables new expressions of our agency in what could be classified as an “Erlebnis” direct experience within an incepted virtual reality.
VR is starting to give us more and more experiences that are impossible to have in reality, and our memories of these experiences can be just as vivid as “real life” experiences, which further blurs the line between the “virtual” and the “real.” The long-term implications of this are still unclear, but what is clear is that Owlchemy Labs has been focused on the principles of Plausibility and Agency, which mirrors OSSIC CEO Jason Riggs’ recent declaration that the future is going to be Immersive and Interactive.
If we are in a simulation, then it’s possible that we may never be able to reach base reality. As we continue to experience simulations that are more and more indistinguishable from reality, then perhaps the best that we can do is to strive to reach the deepest sense of presence at each layer of inception that we discover.