Sony’s Richard Marks Expects Natural Voice Input to Play Major Role in VR’s Future

In a recent interview with Glixel, Dr. Richard Marks, head of Sony’s Magic Lab R&D team, talked about PSVR’s development history, social VR, and a possible holodeck-style future. He thinks voice input has unrealised potential, and could become the way users launch into different VR experiences in the future, customising them in real-time thanks to procedural generation.

Following a Christmas break where he studied a robot vacuum cleaner, tested all available voice-input devices for the home (such as the Amazon Echo and Google Home smart speakers), and watched every Black Mirror episode, it was voice control that excited Sony’s head of Magic Lab the most. Marks thinks that a voice-enabled VR environment, perhaps in the form of a procedurally-generated sandbox, where practically any element could be changed at the user’s command, “doesn’t seem very far away.”

Marks imagines a future where voice input technology is set free in VR, limited only by the user’s imagination. He describes a possible virtual environment that is partly procedural but contains finely-crafted areas created by development teams, where users would spend most of their time.

“That’s the kind of thing that will involve probably multiple groups and multiple companies even to get all the content that you would want to have happen, but that’s what I think the vision of VR is in the future. That’s why I see it as the holodeck. I just put it on and I can make my world anything I want right now”, he says.


With apps like Virtual Desktop, and even Oculus Home, it is already possible to launch VR software by voice from within a PC headset, and there are several interpretations of holodeck-like launch environments available or in development. But Marks is imagining a time when machine learning has taken significant steps beyond where it is today, allowing users to spawn anything from a vast library, or interact seamlessly with virtual characters, using nothing more than a voice command.

Google, which recently claimed to have the most accurate speech recognition, announced during last week’s I/O 2017 conference, an event heavily focused on machine learning, that its collective AI efforts now sit under Google.ai. Its natural language processing is at the cutting edge of voice technology, but developers are only beginning to explore the complexities and nuances of voice user interface design, as described in James Giangola’s presentation. There are many hurdles to overcome before we can have meaningful and frictionless conversations with our virtual assistants that go beyond a limited set of commands.

Asked why there isn’t a VR version of the most popular games like League of Legends or Overwatch, Marks offers a few reasons, suggesting that the number of available players and budget determines the type of game that can be made, and that sometimes a VR version simply doesn’t make sense without effectively making two different games. He points towards Resident Evil 7 (2017), whose VR mode is currently exclusive to PSVR, as a good example of a game that works on both screen and headset.

“When the game can do it I think it’s a great thing for them to do, because they can take advantage of the huge installed base of non-VR players too”, he says. “But I think once the installed base of VR gets big enough then obviously we won’t have that issue. You can just make an amazingly deep long game that’s super high production value… It just won’t be exactly the same game.”

Referring to Star Trek: Bridge Crew, which launches at the end of the month, Marks talks about the importance of social interaction in VR and in particular, the feeling of ‘co-presence’, and how it will improve in the future as the number of VR users increases, bringing greater incentive to share a virtual space with others. But artificial characters will always have a role to play, and there is a higher expectation for believable interaction with NPCs in VR games. To highlight co-presence using AI, Magic Lab has a ‘believable characters’ demo, where you interact with robots in a playroom using natural gestures and body language.

The post Sony’s Richard Marks Expects Natural Voice Input to Play Major Role in VR’s Future appeared first on Road to VR.

Farpoint review: an embryonic and limited virtual reality experience

Developer Impulse Gear has made an earnest attempt at a VR version of Halo, but the game, and its strange PlayStation Aim Controller, fall short of the target

When the GunCon, a plastic replica pistol for the PlayStation console, first launched in December 1995, it came in just one colour: jet black. Viewed from any distance, the only giveaway that this was a video game controller, rather than an authentic firearm, was the claret-coloured start button on the side of the barrel. Pull a GunCon from a rucksack on a crowded subway and you’d almost certainly cause a terror stampede. When the devices launched in the UK, the law demanded they be recoloured bright blue and red.

There’s no risk of any potentially deadly confusion when it comes to the PlayStation Aim Controller, which launches this week alongside Farpoint, a futuristic shooting game built for virtual reality. It’s an impressionistic sketch of a firearm, built from the kind of white tubing you might find under a kitchen sink, with a glowing ping-pong ball fixed to the end of the barrel. If the purpose of peripherals like this is to narrow the gulf of abstraction that separates activity in a video game from its real-world counterpart (the plastic driving wheel that makes it feel more like you’re driving a Ferrari in Forza, for example, or the wooden gear lever that approximates the Shinkansen’s dashboard in Densha de Go), then this effort seems laughably off-target.
