Amazon’s VR-capable Lumberyard Game Engine Sustained by AWS Cloud Services

Back in February, Amazon announced that they had purchased and forked Crytek’s CryEngine, making the full source code available in a free AAA VR game engine offering called Lumberyard. The only catch is that if you want to use any public cloud service, then you have to use Amazon Web Services offerings.

At SIGGRAPH, I had a chance to talk with Hao Chen, a senior principal engineer for Amazon Lumberyard, about the Cloud Canvas visual scripting interface to AWS and GameLift multiplayer offerings. We also talked about some of the research and development areas such as integrated artificial intelligence offerings, natural language processing with Alexa, potential ecommerce solutions, and research into digital lightfield capture, compression, and delivery.

Sunset Meadow – Created with Amazon Lumberyard | Photo courtesy Amazon

Amazon wasn’t making any specific new product or gaming content announcements, but it’s clear that part of Amazon’s long-term strategy is to rely upon game developers using their public cloud services in order to fund and sustain future development on their Lumberyard game engine. You can hear more about some of the existing features and functionality of Lumberyard as well as some future research on today’s episode of the Voices of VR podcast.

LISTEN TO THE VOICES OF VR PODCAST


Support Voices of VR

Music: Fatality & Summer Trip

The post Amazon’s VR-capable Lumberyard Game Engine Sustained by AWS Cloud Services appeared first on Road to VR.

VR Design Best Practices with Google VR’s Alex Faaborg

At the company’s annual I/O conference, Google announced the Daydream VR platform and mobile headset that will be coming to the latest Android phones later this year. There’s a DIY dev kit that you can start using today to develop Daydream-ready apps, and Google has also released a Google VR Unity SDK that includes a number of Daydream Labs Controller Playground examples demonstrating different user interactions with the 3DOF controller.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with Google VR’s Alex Faaborg at the Casual Connect conference where he talked about some of the VR design best practices, some of the early survey results from Google showing an average VR play time of 30 minutes per session, what can be learned from Pokémon Go, the differences between Tango and Daydream app design, social norms of using VR around other people, and the future of conversational interfaces.

Here’s the presentation from Google I/O on Designing for Daydream:


Support Voices of VR

Music: Fatality & Summer Trip

The post VR Design Best Practices with Google VR’s Alex Faaborg appeared first on Road to VR.

Documenting the Evolution of VR Headsets with Zenka’s Sculptures

The artist Zenka has been documenting the evolution of virtual reality by making raku sculptures of VR headsets. She’s also created an interactive timeline of some of the major VR and AR headsets. Technology has been progressing so quickly that looking back at cell phones from 10-20 years ago starts to feel like ancient history. Zenka feels the same way about VR and AR headsets as we start to see more patents like Sony’s smart eye contacts or Google’s cyborg eye implants.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with Zenka at the Rothenberg Founder Field Day in May where we talked about her VR HMD art project, her other augmented reality art projects, some of her thoughts about identity and revisiting nostalgic memories in VR, and some of her other anthropological observations about this moment in history.

See Also: These Brilliant Sculpture Masks Chart Virtual Reality’s History

Here’s a video of some of Zenka’s recent AR installations:

Here’s a picture from the 2014 IEEE VR conference of a collection of head-mounted displays curated by NASA’s Stephen Ellis.


Support Voices of VR

Music: Fatality & Summer Trip

The post Documenting the Evolution of VR Headsets with Zenka’s Sculptures appeared first on Road to VR.

Khronos Group President Neil Trevett on glTF: The ‘JPEG for 3D Objects’

Before SIGGRAPH this year, the Khronos Group announced that the open standard glTF was gaining momentum among some of the key players within the graphics industry. glTF provides a standardized baseline and interchange format to deliver 3D content to different tools and services; it’s been described as analogous to the flexible and widely used JPEG format for images. The traction for a glTF open standard means that one of the fundamental building blocks for the metaverse is coming into place.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to sit down with the Khronos Group President Neil Trevett at SIGGRAPH where he explained the significance of this emerging consensus. He expands upon what glTF includes, and what it doesn’t. For example, there are not any point clouds or light fields within glTF. He also emphasized that previous efforts for an open format like VRML and X3D have included logic and code in the file, but glTF is meant to be simply a lightweight wrapper for 3D objects and textures. The code and logic for what to do with these assets will be in a separate format since it could range from JavaScript to C# to C++ to other emerging languages.

Neil said that an open standard for transmitting mesh data is something that many major companies were already independently working on. One of the primary use cases is being able to transmit 3D objects between all of the major authoring programs and eventually onto the web. The Khronos Group had already proposed the glTF spec, and most of the preliminary feedback was that it could be a viable solution that solves many of the most difficult problems.

There are some available extensions to glTF, like physically-based rendering to add material and reflective properties. But Neil emphasized that they want to keep the initial glTF specification lean and simple in order to make it easier to implement and maximize adoption. They’ll be paying attention to popular extensions, and if one sees wide enough adoption, then there’s a chance it’ll be rolled into the official glTF specification.

A glTF validator is also available, and for more information, be sure to check out the glTF resource page on the Khronos Group’s website.
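To illustrate the “lightweight wrapper” idea Neil describes, here’s a minimal sketch (not from the interview) of a glTF 2.0 asset for a single triangle, assembled in Python. The field names and magic numbers follow the glTF 2.0 specification; the vertex data and layout here are illustrative assumptions. Note there’s no logic or code in the file, just geometry data and the indices that describe how to read it:

```python
import base64
import json
import struct

# Three vertex positions (x, y, z), packed as little-endian 32-bit floats.
positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
buffer_bytes = b"".join(struct.pack("<3f", *p) for p in positions)

gltf = {
    "asset": {"version": "2.0"},
    "buffers": [{
        "byteLength": len(buffer_bytes),
        # Embed the binary payload as a base64 data URI so the file is self-contained.
        "uri": "data:application/octet-stream;base64,"
               + base64.b64encode(buffer_bytes).decode("ascii"),
    }],
    # A bufferView is a slice of a buffer; an accessor describes how to interpret it.
    "bufferViews": [{"buffer": 0, "byteOffset": 0, "byteLength": len(buffer_bytes)}],
    "accessors": [{
        "bufferView": 0,
        "componentType": 5126,   # 5126 = FLOAT in the glTF spec
        "count": 3,
        "type": "VEC3",
        "min": [0.0, 0.0, 0.0],
        "max": [1.0, 1.0, 0.0],
    }],
    # The mesh just points at accessor 0 for its POSITION attribute.
    "meshes": [{"primitives": [{"attributes": {"POSITION": 0}}]}],
    "nodes": [{"mesh": 0}],
    "scenes": [{"nodes": [0]}],
    "scene": 0,
}

print(json.dumps(gltf, indent=2))
```

Any engine or tool that understands glTF can load this JSON directly; the code that decides what to *do* with the triangle lives elsewhere, exactly the separation of assets from logic that Neil emphasized.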


Support Voices of VR

Music: Fatality & Summer Trip

The post Khronos Group President Neil Trevett on glTF: The ‘JPEG for 3D Objects’ appeared first on Road to VR.

Telling Stories in VR with Improv Acting and ‘Mindshow’

At VRLA Summer Expo 2016, Visionary VR premiered Mindshow, an interactive storytelling platform which allows users to animate and voice characters by inhabiting those characters’ bodies.

I had a chance to catch up with Visionary VR and VRLA co-founder Cosmo Scharf, where we talked about some of the inspiration behind Mindshow, including the Buddhist philosophy of Alan Watts and the post-symbolic, direct experience ideas of Terence McKenna.

LISTEN TO THE VOICES OF VR PODCAST

Mindshow was quite a unique experience: I was able to record an improv acting session within the body of a virtual character and then step outside of myself to watch the performance unfold from an out-of-body vantage point. I’ve recorded myself with a 2D camera plenty of times before, but there’s something qualitatively different about watching ‘my’ body movements while immersed within a spatial environment.

The core mechanic of reacting to a story prompt was simple and intuitive, and the number of variations in how a scene plays out is limited only by human creativity. The initial Mindshow demo at VRLA had a simple linear capture where you could layer additional characters into a scene while previous takes played back to you. You could develop an entire story by rapidly iterating different performances of yourself, much like a looping musician might construct a song.

But the true power of Mindshow will be in its collaborative features, where you’ll be able to communicate with your friends through the direct experience of a story rather than through abstracted, symbolic language. You could pass a scene back and forth like an asynchronous improv performance, or, once the feature is implemented, interact in real time.


Support Voices of VR

Music: Fatality & Summer Trip

The post Telling Stories in VR with Improv Acting and ‘Mindshow’ appeared first on Road to VR.

Overcoming Fears of Public Speaking with ‘Speech Center VR’

Cerevrum is building an ambitious educational platform, starting with training people to become better public speakers with Speech Center VR. The basic mechanic is that you stand in a variety of different virtual rooms in front of an animated crowd of virtual listeners as you give a presentation. The app is designed to help people get over their fears of public speaking, but there are many other learning opportunities from a number of upcoming courses featuring public speaking coaches.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with CEO Natasha Floksy and COO Olga Peshé to talk about designing their two educational applications: Speech Center VR and their brain-training application Cerevrum.


Natasha has an art degree, and there’s a strong design aesthetic imbued within every dimension of Speech Center VR, from the different rooms to the user interface to the highly customizable avatar system, which is one of the more impressive aspects of the experience. At the moment, you are a disembodied ghost, and so you never fully appreciate your own selected digital identity. But there is a wide array of identity choices, along with many different features and functionality within the app.

You can download Speech Center VR for free, upload a presentation PDF, and record yourself talking to a room full of virtual strangers. There are also interactive social components, just in case you want to hold an intimate meetup there. There are a number of in-app purchases that offer practice within a variety of other public speaking engagements. There’s actually a surprising amount of functionality included within the experience, including a supplemental eye-training application to help improve your vision.

There are a number of small improvements that could be made, including adding a monitor for the presenter, making the social behaviors of the virtual audience a little less uncanny, and offering a more intuitive way to advance slides than swiping down on the side of the Gear VR headset. But overall, Cerevrum has built a robust educational platform with a lot of room to grow into many specific domains.


Support Voices of VR

Music: Fatality & Summer Trip

The post Overcoming Fears of Public Speaking with ‘Speech Center VR’ appeared first on Road to VR.

Social Presence & Training Social Dynamics with Virtual Humans

Team training scenarios are often difficult due to the logistics involved with coordinating many different people’s schedules. One solution has been to use virtual humans as stand-ins for actual humans in team training scenarios, with the conversation mediated by ‘Wizard of Oz’ interactors who puppet the virtual humans. The goal is to recreate a sense of social presence so that the person being trained forgets that they are interacting with virtual humans instead of actual humans.

LISTEN TO THE VOICES OF VR PODCAST

There still needs to be a human in the loop to interpret and respond to the primary person being trained, but the interactor operating the virtual human can respond by selecting from a number of pre-recorded scripted responses. Even though real humans are almost always preferred, using virtual humans can give more accuracy and repeatability to the training scenario, and provide similarly strong results with more efficiency.

See Also: ‘Mindshow’ Revealed by Visionary VR – Stage, Animate, and Record Your Own Stories in VR

Andrew Robb is a post-doc at Clemson University, and he’s been researching how to use virtual humans in these types of team training scenarios. Specifically, he talks about training nurses to stand up to surgeons who want to proceed with a surgery despite replacement blood not yet being ready, which would put the patient’s life in danger if there’s a complication in the preparation process. These are complicated social dynamics, and if a nurse isn’t comfortable speaking up, it could result in the patient dying. So Andrew has been focused on how to recreate a sense of social presence using virtual humans in order to create a team social dynamic that gives nurses the practice and training to have the confidence to speak up against someone on their team who wants to violate safety protocol.

Andrew mentions a paper by Frank Biocca and Chad Harms titled “Defining and measuring social presence: Contribution to the Networked Minds Theory and Measure,” which sets out some definitions for social presence and a networked minds theory for understanding the mechanics of social presence in a virtual environment. They say: “Most succinctly defined as a ‘sense of being with another in a mediated environment’, social presence is the moment-to-moment awareness of co-presence of a mediated body and the sense of accessibility of the other being’s psychological, emotional, and intentional states.”

I had a chance to catch up with Andrew at the IEEE VR conference where he talked about his experiments in using virtual humans within team training scenarios, some of the research of how humans self-disclose more information to virtual humans, how gaze behavior could provide an objective measure for social presence, and more details about other theories of social presence and co-presence that provide models for how we create models of people’s minds, feelings, and motivations.


Support Voices of VR

Music: Fatality & Summer Trip

The post Social Presence & Training Social Dynamics with Virtual Humans appeared first on Road to VR.

‘Headmaster’ & Why the Physics of Stuff Flying at Your Face is so Compelling in VR

When Ben Throop went to the Boston VR hackathon in June 2014, he didn’t know that Valve was going to be showing off some prototype VR hardware with positional tracking. The Oculus DK2 had not shipped yet, so he built a VR game drawing on his soccer experience: heading soccer balls into a goal. He wanted to see how it felt, and was surprised that it was actually a lot of fun. He decided to continue working on it, and last year the game, Headmaster, was first announced at E3 as a PlayStation VR launch title. Frame Interactive was back again this year at E3 showing off the game at PlayStation VR’s booth.

LISTEN TO THE VOICES OF VR PODCAST

I had a chance to catch up with Ben at Sony’s GDC press event, where I talked to him about the game design principles behind Headmaster, why even non-gamers love to play it, and why the physics of things flying at your face are so compelling in VR.

Here’s the trailer for Headmaster:


Support Voices of VR

Music: Fatality & Summer Trip

The post ‘Headmaster’ & Why the Physics of Stuff Flying at Your Face is so Compelling in VR appeared first on Road to VR.

Research on VR Presence & Plausibility with VR Technical Achievement Award Winner Anthony Steed

One of the gold standards of a VR experience is achieving Presence, but Presence is an elusive concept to precisely define. Mel Slater is one of the leading researchers into Presence and says that it’s a combination of the ‘Place Illusion’ and the ‘Plausibility Illusion’, which Richard Skarbez elaborates by saying that the Place Illusion represents the degree of immersion that you feel by being transported to another place, and the Plausibility Illusion is the degree to which you feel that the overall scene matches your expectations for coherence.

LISTEN TO THE VOICES OF VR PODCAST

Anthony Steed is a professor in the Virtual Environments and Computer Graphics group in the Department of Computer Science, University College London. I had a chance to catch up with Anthony at the IEEE VR conference where he talks about doing distributed Presence research with a Gear VR, the role of plausibility in Presence, how social Presence fits into Mel’s two illusions of Presence, and some of the discussions about sharing knowledge between game developers and academics that happened at GDC and IEEE VR conferences this year.

See Also: Oculus Shares 5 Key Ingredients for Presence in Virtual Reality

Anthony studied under Mel Slater, and he was a co-author of one of the major Presence surveys referred to as the Slater, Usoh & Steed survey in the “Depth of Presence in Virtual Environments” paper. Anthony was also the winner of the 2016 Virtual Reality Technical Achievement Award presented at the IEEE VR conference this year.

Here’s a video of the Presence experiment that Anthony conducted on the Gear VR, where he found that tapping on your body along with the music, without having your hands tracked, had a negative impact on embodiment.

Here’s the 2015 IEEE VR poster from Richard Skarbez talking about his Presence research into the Place Illusion and Plausibility Illusion:


Support Voices of VR

Music: Fatality & Summer Trip

The post Research on VR Presence & Plausibility with VR Technical Achievement Award Winner Anthony Steed appeared first on Road to VR.

AMD’s Roy Taylor on New GPUs, Tools, & Consumer VR Pods


I had a chance to catch up with AMD’s Roy Taylor, VP of Alliances, Content, and VR, at VRLA to hear more about AMD’s recent announcements, their open source philosophy, their support for VR storytellers, and the upcoming VR on the Lot event on October 13th & 14th that will help educate the film industry about VR.

LISTEN TO THE VOICES OF VR PODCAST

AMD has made a number of different announcements over the past couple of weeks at both SIGGRAPH and VRLA. They announced new Radeon Pro graphics cards designed for professional visual effects creators, an open source VR video stitching tool called Project Loom (to be released on GPUOpen later this summer), and the open sourcing of their ray tracing program FireRender, rebranded as ProRender.

At VRLA, AMD announced that they’re bringing VR demos to the masses in public spaces like malls and movie theaters in partnership with Awesome Rocketship. In order for VR to be successful, AMD is helping to support initiatives that make it more accessible for consumers to try. AMD also announced the least expensive VR-ready PC that meets the Vive’s and Oculus’ minimum specifications, the CYBERPOWERPC Gamer Xtreme VR, for $720, now available on Amazon.

Radeon Pro is being sold to visual effects professionals and VR storytellers:

Here’s a teaser trailer for the Awesome Rocketship VR demo pods that AMD will be helping to bring to malls, movie theaters, and other public areas where people gather:

And here are several of my tweets from AMD’s SIGGRAPH presentation highlighting some of their important announcements:


Support Voices of VR

Music: Fatality & Summer Trip

The post AMD’s Roy Taylor on New GPUs, Tools, & Consumer VR Pods appeared first on Road to VR.