One of the flagship announcements at last week’s Oculus Connect developer conference, Epic Games’ Robo Recall is a slick, polished first-person shooter built for Touch, Oculus’ forthcoming VR motion controllers. Here’s 12 minutes of raw gameplay from the Connect 3 demo to give you some idea of what to expect when the game lands early next year (see embedded video at the top of this article).
Road to VR‘s Ben Lang described Robo Recall as “satisfying action-packed fun” after spending time with the game at this year’s Oculus Connect 3 conference. The game, which evolved from Bullet Train, the extremely well received tech demo for Touch, retains a lot of the same core mechanics as its predecessor but polishes and hones them.
Now, however, the premise is that you, the protagonist, are out to ‘recall’ hordes of malfunctioning, armed mechanoids with brute force – i.e. your fists and a selection of guns. Ben had this to say about the title in his recent hands-on article:
The sum of the experience is satisfying action-packed fun. Grab a robot, rip the gun out of its hand, then blow its head off and use the corpse as a shield.
The game isn’t just fun, it’s also impressively beautiful. That’s the norm for pretty much anything Epic has set their minds (and their impressive Unreal Engine) to, but Robo Recall in particular uses some new tech from Epic to look extra sharp in VR.
The title will be made available for free to Oculus Touch owners, with three environments each sporting three missions. The game will appear some time in Q1 2017.
There was quite a lot of information to take away from last week’s Oculus Connect 3 conference, with new social options, experiences and of course Oculus Touch headlining. One of the smaller announcements Oculus CEO Brendan Iribe made concerned the availability of extra sensors, which will enable the Oculus Rift, with the Oculus Touch controllers, to do room-scale tracking. But one question remained unanswered: what about the USB cable length? Now Iribe has answered.
This initially began when Tested interviewed Nate Mitchell, VP of Product at Oculus, who said: “I don’t want to eat my words on this one, but I believe we are going to include a USB extension cable as part of the [third sensor] package”. This was then confirmed today by Iribe via Reddit, saying: “That’s correct. The additional Oculus Sensors will include a USB extension cable.”
He then updated the post with: “We’ll have more details when they’re available for pre-order later this month. I’d love to share more now but we’re still finalizing and I don’t want to be wrong. Our goal is for the cable to be long enough to be useful for most.”
There are a number of reasons why customers wanted this question answered. Firstly, Oculus Rift’s rival, the HTC Vive, includes room-scale tracking right out of the box, and its sensors don’t require a cable to a PC; they’re wireless apart from a power cable. This means they can be put anywhere a user deems suitable to maximise the play area they have available.
But the Oculus Sensors all require a direct USB connection to a PC. That’s fine for the single sensor, or the second that comes supplied with Oculus Touch. But for true room-scale the third would generally need to be placed behind the user, so that no matter which way they’re facing they can always be tracked. While the sensor cables are of a reasonable length, they’re certainly not long enough to give complete freedom of placement, which is why a USB extension lead is so important.
The extra Sensor will retail for $79 USD and begin shipping on 6th December 2016. The only question left to be answered is how long the extension cable will be, and that will likely be resolved when pre-orders go live later this month.
For all the latest Oculus news, keep reading VRFocus.
In this episode of the Voices of VR podcast, I recap some of my highlights from Oculus Connect 3, but also dive into some of my biggest concerns and questions coming out of this year’s big developer conference. My two biggest concerns were the lowering of the minimum specification for an Oculus-ready machine, as well as some new announcements from Oculus about their support for room-scale VR.
LISTEN TO THE VOICES OF VR PODCAST
These topics are difficult to really cover since a lot of information is protected by non-disclosure agreements, but I was able to talk to a number of different developers off-the-record and synthesize these conversations into this op-ed podcast. There are still a lot of open questions, unknowns, and concerns that I try to dig into a bit more, as well as the overall vibe and other private hallway discussions that were happening at the San Jose Convention Center.
Among the swath of announcements made at Oculus Connect 3, one of the fleeting but important pieces of content-related news is that Walt Disney Studios is working with Oculus to create a series of VR experiences based on iconic characters and franchises.
We’ve already seen some pieces of content from Disney in the modern VR space, such as the Disney Movies VR app currently on Steam, but that initial half-hearted attempt doesn’t offer the kind of value you might expect from a Disney production.
Now that Oculus is working with Disney, we may see something more impressive, maybe on the level of Henry, an early Oculus Studios animated VR film that recently won an Emmy. Oculus, at least, has boldly claimed that the collaboration will lead to high-production-value content.
“Oculus technology mixed in with the magic and creativity of Disney is going to set a higher bar for VR,” said Yelena Rachitsky, Creative Producer at Oculus.
The first fruits of that collaboration will arrive later this year, she said, and will focus on Disney’s “most beloved characters.”
Disney was also an early pioneer of virtual reality in years past, having applied early VR technology to some of its theme park attractions, though much of that technology is now out of date compared to the latest wave of VR.
Frank He gets his hands on Ready at Dawn’s first VR title and finds Lone Echo delivers one of the best zero-g experiences yet seen in VR, all thanks to clever design and of course the Oculus Touch motion controllers.
Out of the several high-budget games announced at Oculus Connect 3, one of the most mysterious might be Lone Echo, an Oculus Touch title by Ready at Dawn, developers of PS4’s The Order: 1886 and the Wii port of the classic Okami.
You may have struggled to grasp precisely what Lone Echo is all about, having watched only the debut trailer (below), but after playing the single-player demo at OC3, here’s my one-line synopsis: Lone Echo is a first person VR adventure game, where you play a robot-piloting AI named Jack who serves a human crew investigating an anomaly in deep space.
With an original narrative, high quality art and sound direction, and an innovative movement system, it was one of the most promising demos for a VR game I’ve tried. The demo takes place entirely in zero gravity, and the locomotion they’ve developed for it hasn’t been seen in any other currently available space-themed VR game, so let’s start there.
If you think back to the instances in movies or videos where astronauts are throwing themselves around in space by grabbing and pushing against a wall or a handle, ‘Lone Echo’ might be exactly what you’d imagine that should feel like in VR, at least with the limitations of today’s consumer hardware. I found Lone Echo finally edges closer to providing the zero gravity experience I’ve always wanted.
Previous VR experiences – ThreeOneZero’s ADR1FT for example – have mostly been about using thrusters to navigate space, but with the possibilities VR motion controllers open up, you can use your hands to reach out and grab any object or surface in the game, be it a railing or another floating human, and push off it in order to propel yourself.
There’s also a system where, when grabbing onto things, your virtual robotic hands lock to the surface, with the fingers properly conforming as you’d expect, no matter what shape it is. This was a software technique Ready at Dawn pulled off just in time to show it at Connect, and – despite some glitches – it definitely helps sell your interaction with the environment. The game also allows you to traverse space via mounted thrusters, controlled by directional pointing of the Touch device.
While it did feel to me like this game truly let me explore in zero gravity with fewer constraints than before, the very nature of zero-g movement will be alien to most people, and so nausea is a risk. However, I was told by one of the developers that slow acceleration, one of the primary triggers of motion sickness in VR, was mostly done away with in Lone Echo; instead, velocity ramps up almost instantaneously in most cases. There also seemed to be no artificial rotation that I could detect, with the player using their physical body to turn. That may explain why I felt totally comfortable playing through the demo, with none of the ‘swimmy’ feelings you might expect from such an experience.
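To make that design concrete, here’s a minimal sketch of how push-off locomotion of this kind might work – purely illustrative, and not Ready at Dawn’s actual implementation: the hand’s velocity at the moment a grip is released is transferred to the player in a single step, with no gradual ramp-up.

```javascript
// Illustrative sketch of zero-g push-off locomotion (not Ready at Dawn's code).
class ZeroGBody {
  constructor(position) {
    this.position = position;             // { x, y, z } in metres
    this.velocity = { x: 0, y: 0, z: 0 }; // metres per second
  }

  // handVelocity: the controller's velocity relative to the torso at the
  // moment the grip is released. Pushing a surface away propels the player
  // in the opposite direction, applied instantly rather than ramped up.
  releaseGrip(handVelocity) {
    this.velocity = {
      x: -handVelocity.x,
      y: -handVelocity.y,
      z: -handVelocity.z,
    };
  }

  // No drag in space: the player simply coasts between pushes.
  update(dt) {
    this.position.x += this.velocity.x * dt;
    this.position.y += this.velocity.y * dt;
    this.position.z += this.velocity.z * dt;
  }
}
```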
With that compelling locomotion demonstrated, I was keen to see what the developers could do with it. Starting with a narrative introduction, I found it difficult to pay attention, as I was preoccupied with looking around, gazing at the environment and at my new-found robotic body, which was fully rendered and visible. According to the human I was serving, I, as Jack the AI, needed to be calibrated first. That of course served as the convenient tutorial section, consisting of basic obstacle courses designed to get me up to speed on movement, motion-controlled laser-cutting, holographic button pushing, and the usual stuff you do in space as a digital-mechanical being.
An extra but notable little detail they added in the calibration phase: when the captain came to inspect you, as she does in the above trailer, should you actually reach out to try and touch her, she would react by leaning back to keep out of your grasp. If you were quick enough, you could also snatch away her notepad, which she would also react to. Other NPCs showed similar behavior.
Once the demo proper begins, there isn’t a huge amount of action: you’re sent to operate a device that somehow modulates the space anomaly, interference from an unknown force occurs, and your captain calls on you to save her after her leg gets stuck in the chaos. Then the screen fades to black as a giant spaceship emerges from the anomaly. Most of what I did in the demo can actually be seen in the trailer, but what made it all worthwhile was the combination of well crafted elements – sound, voice acting, art assets, and small details like the NPC behavior – that made for a consistently immersive, story-driven experience, especially with the locomotion system melting away as it did.
I came away impressed with what Ready at Dawn had to show at Connect with Lone Echo. The title exhibits originality in its locomotion design, and at the same time manages to convey the traversal of space better than any other VR title I’ve tried yet. Alas, we have no release date to look forward to as yet, but the developers did at least tell me that this will be no lightweight ‘VR Experience’ when it finally appears for the Oculus Rift and Oculus Touch.
Despite Google’s official unveiling of the forthcoming Daydream mobile head-mounted display (HMD) earlier this week, mobile virtual reality (VR) is already due a revolution. The Oculus VR and Samsung collaboration on Gear VR may only be a year old, but in an industry moving this fast it is already looking long in the tooth. The next holy grail of mobile VR is inside-out tracking – allowing for head-tracked movement without an external camera – and Oculus VR has come along with the technology to adequately deliver not only this, but room-scale movement tracking also.
Intel and Qualcomm have both revealed mobile HMD prototypes in recent months that allow for inside-out tracking of the head, delivering an experience not too dissimilar to that which is currently available on the Oculus Rift (sans Oculus Touch and additional Constellation trackers) or the soon-to-launch PlayStation VR. However, Oculus VR’s Santa Cruz prototype goes one further by incorporating free movement. This isn’t simply allowing look, tilt and distance to be tracked: Santa Cruz allows the user to move freely around a space much like the HTC Vive.
The demonstration software was very simple. An opening science-fiction scene with no action was presented to allow the user to acclimatise to the volume they could move within and get their first demonstration of the visual chaperone system: a grid which currently looks very similar to the HTC Vive’s. Soon thereafter, the user is transported to a new version of the paper city demonstration first seen alongside the Oculus Rift Crescent Bay prototype at the original Oculus Connect event two years ago; VR is most certainly becoming cyclical.
Three sequences within this scene had the user walking around a field taking in the view, witnessing planes flying overhead and dogs barking, before things took a turn for the worse: alien invaders. Not long into the short demonstration, UFOs enter the scene and begin abducting the white silhouette people, with the climax (or rather, somewhat of an anti-climax) being your own abduction. The software was less thrilling an experience than the impressive technology itself. Again, the sensation of being wowed more by hardware than software takes us back to a time when the shape that consumer VR would initially take was still a mystery.
The tracking was very well realised. Spatial awareness was incredibly accurate, as was head movement in terms of depth. Turning did present some irregularities, however: the displayed image seemed to lag at times, as if uncertain whether your body was moving or just your viewpoint.
As for that hardware, the Santa Cruz demonstration unit was essentially an Oculus Rift, broken apart and cobbled back together with new components. The external housing hosted four cameras on the front panel which determined your position relative to the preset volume – no information is currently available on how to set that volume – and the rear has a large bulk containing all of the on-board computational hardware. HDMI and USB cables are visible for transferring data from this rear processing to the internal monitor and vice versa, resulting in a much more delicate-feeling design than any previous Oculus VR hardware; however, the impression that this is very early hardware was reinforced throughout.
Santa Cruz is a long way from consumer release. That’s evident from the HMD on show, if not from Oculus VR’s own track record. However, the device is very impressive even in its current state, and from here on out it’s only going to get better. When will we see inside-out tracking on a mobile HMD from Oculus VR? It’d be impossible to guess right now, but you can rest assured that when it does come it will be of a very high standard.
Oculus announced at Connect, the company’s annual developer conference, that they’ll be officially supporting WebVR through their new VR web browser codenamed Carmel. WebVR is an API that gives headsets access to web-based VR content.
Touted as an easy way to share VR experiences over the web, WebVR gives JavaScript developers a way to deliver simple VR content into the hands of anyone with a VR headset just by navigating to a URL (i.e. no long downloads or installs necessary).
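For a sense of what that looks like in practice, here’s a minimal sketch against the evolving WebVR draft API (the spec is still in flux, so details may shift): find a connected headset, enter presentation mode on a user gesture, then drive a render loop from the display itself.

```javascript
// Minimal WebVR sketch: mirror a WebGL canvas to a connected headset.
// Based on the WebVR 1.1 draft; error handling omitted for brevity.
const canvas = document.querySelector('canvas');

navigator.getVRDisplays().then((displays) => {
  if (displays.length === 0) return; // no headset connected
  const vrDisplay = displays[0];

  // Presenting must be triggered by a user gesture, e.g. a click.
  document.body.addEventListener('click', () => {
    vrDisplay.requestPresent([{ source: canvas }]).then(() => {
      const frameData = new VRFrameData();

      function onFrame() {
        vrDisplay.requestAnimationFrame(onFrame);
        vrDisplay.getFrameData(frameData); // per-eye view/projection matrices
        // ...render the scene for each eye using frameData here...
        vrDisplay.submitFrame();           // push the finished frame to the headset
      }
      vrDisplay.requestAnimationFrame(onFrame);
    });
  });
});
```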
Oculus co-founder Nate Mitchell took the stage and presented the new VR browser, stating the WebVR initiative “is going to lead to an exponential growth in VR content out there. Everyone in the future is going to have their own VR destination on the web.”
Mitchell then introduced a number of use cases for prospective developers, some of which he said could be completed in just a few days, like a web-based photo sphere site or a 3D rendering of a new car.
Oculus says Carmel is optimized for performance, designed for navigation and input in VR, and will be tightly integrated with Home and “run on any Oculus device.”
Samsung’s Gear VR web browser ‘Samsung Internet’ already has preliminary support for WebVR, but the move by Oculus to support it directly and offer the tools to do so means they’ll be throwing their full weight behind the initiative.
To help developers build VR web content, Oculus also announced React VR, a VR-focused version of React, the open source JavaScript library created by Facebook in 2013 that helps developers build user interfaces for web-based content.
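React VR hadn’t shipped at the time of the announcement, so the snippet below is an assumption rather than confirmed API – the ‘react-vr’ module name and the Pano/View/Text components are illustrative – but a component built on React’s declarative model might look something like this:

```javascript
// Hypothetical React VR component; module and component names are assumed.
import React from 'react';
import { AppRegistry, asset, Pano, Text, View } from 'react-vr';

class WelcomeScene extends React.Component {
  render() {
    return (
      <View>
        {/* A 360-degree photo sphere as the backdrop */}
        <Pano source={asset('photo-sphere.jpg')} />
        {/* Text floating three metres in front of the viewer */}
        <Text style={{ transform: [{ translate: [0, 0, -3] }] }}>
          Hello, VR web
        </Text>
      </View>
    );
  }
}

AppRegistry.registerComponent('WelcomeScene', () => WelcomeScene);
```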
A developer preview of Carmel is said to come later this year along with React VR. Oculus has listed a number of real-world examples on their WebVR page to give prospective devs an idea of what to build for the coming VR web.
At Oculus Connect’s opening keynote today, Facebook founder Mark Zuckerberg took to the stage to drive home the company’s commitment to the virtual reality space. How much commitment? Some $500M worth.
Amid a flurry of announcements at Oculus Connect 3’s opening keynote, the central message of the event is that content is crucial and, now that the VR industry is moving beyond the initial challenges of delivering hardware that works, it’s the presence of truly groundbreaking and compelling VR experiences that will make or break immersive technology as the next platform.
To drive this message home Mark Zuckerberg, founder of Facebook, which acquired Oculus in 2014 for $2Bn, took to the stage to take part in a confident and altogether impressive demonstration of the company’s latest advancements in social VR. After rejoining the real world, Zuckerberg went on to highlight how much money Oculus and Facebook are pumping into the development community to help ensure VR becomes a lasting phenomenon. “We’re really committed to helping this community build all kinds of [VR] experiences,” said Zuckerberg, going on to state that “we have already invested more than $250M into this community, to fund the development of all kinds of content.” Zuckerberg then pledged to developers that Oculus and Facebook will pump a further $250M into future content projects.
Oculus have repeatedly stated that sitting back and waiting for high-quality virtual reality games and applications to appear isn’t good enough. Their strategy since well before the consumer Rift launch has been to actively stimulate and support developers keen to help build VR content in a small, risky marketplace – helping them mitigate that risk. This has led to some controversy over the resulting platform exclusives which have emerged, but it’s an approach that has arguably led to some compelling VR software.
The approach isn’t unique to Oculus of course. Although Valve have admitted to funding development for VR projects, they are characteristically coy about precisely how, or indeed how much. SteamVR partner HTC – manufacturer of the Vive – recently launched its Vive X program, an initiative to inject $100M into helping grow the VR industry.
Up until now, multiplayer games on Oculus Home have relied on their own individual avatar selectors, meaning you couldn’t have the exact same avatar in two different apps. In an effort to socialize the entire platform, Oculus is introducing an avatar editor that will let you style a single avatar and use it across multiple apps.
Mike Howard, product manager for Oculus Avatars, demoed the editor, changing his virtual appearance through a slider-based system. An interactive virtual mirror renders your new choices before you and even lets you physically swap out accessories using Oculus Touch, the company’s soon-to-release hand controllers.
The new avatar editor is said to offer “more than one billion permutations,” letting you pick clothing, accessories, hair, face shape, and avatar texture.
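That figure is less outlandish than it sounds, because independent choices multiply. With hypothetical numbers – Oculus hasn’t published its category breakdown – ten categories of eight options each already clears one billion:

```javascript
// Hypothetical option counts, not Oculus's real category breakdown.
const optionsPerCategory = [8, 8, 8, 8, 8, 8, 8, 8, 8, 8]; // 10 categories
const permutations = optionsPerCategory.reduce((total, n) => total * n, 1);
console.log(permutations); // 1073741824 -- just over one billion
```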
From what we’ve seen at Connect, Oculus Avatars doesn’t provide a few things that other avatar editors do, like a full body or detailed color options, meaning you’ll have to pick a single texture that will cover your entire avatar—certainly not as detailed as the Facebook social VR prototype we saw only moments earlier on stage.
Notably, Oculus Avatars also doesn’t animate any part of the head/upper-chest asset, instead opting to light up the avatar momentarily when the user speaks into the microphone. All of this may be necessary to maintain fluid cross-compatibility when the feature launches on both Gear VR and Rift.
The Avatars SDK, according to co-founder and VP Nate Mitchell, “makes it easy to integrate Touch interactions into your game, and additionally, true hand presence. Because it brings people’s avatars into your experience, people can actually feel like themselves and easily recognize their friends.”
According to an Oculus blog post, Avatars will be available for Rift at Touch launch and for mobile in early 2017.
At a session breaking down the design of Touch during Oculus Connect, designers of the controller explored the choices that drove the form and functionality of the device.
Oculus Touch finally has an official launch date of December 6th, with pre-orders beginning on Monday. Now that the design has been finalized after the many engineering sample revisions seen over the last several months, the design team is talking about how the controller came to be.
Among the interesting info shared during a presentation at Oculus Connect, Touch Lead Electrical Engineer Jason Higgins said that the controller’s battery life had been optimized throughout the development cycle to the tune of a 40% improvement over the original ‘Half Moon’ prototype that was first revealed back in 2015. That improvement allows the controllers to run up to 30 hours on a single AA battery without haptics or up to around 20 hours with haptics, Higgins said.
And yes, it was confirmed that the final version of Touch will be powered by a removable AA battery that stows in the handle of the controller under a magnetic cover. When asked why the team chose not to go with an inbuilt rechargeable battery, Higgins said that the decision was largely driven by not wanting there to be a situation where a dead battery would prevent a player from jumping into VR right away; thus players would be able to simply swap in a new battery rather than waiting for a recharge.
Another change for the final Touch controller confirmed in the talk was the removal of internal magnets from a prior engineering sample. Previously the magnets had been added as a way to clip the controllers together for elegant storage as they sat out of use. However, players would sometimes feel the pull of the magnets when the controllers were near each other, and ultimately the designers said they wanted to optimize for the experience during use rather than the storage use-case. So out went the magnets.
Also demonstrated during the talk was a new pinch gesture that we hadn’t seen before. Through a series of capacitive sensors, Touch can understand particular poses of the player’s hand, like a ‘thumbs up’ or a pointed index finger. Now, with the capacitive pad on the face of the controller, users can form a pinch pose (one that looks like an ‘OK’ gesture) by putting their thumb on the face-pad and pulling the trigger with their index finger. We also confirmed that the all-important ‘middle finger’ gesture could be supported by developers, but it isn’t one of the SDK’s pre-programmed gestures.
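As a rough illustration of how those capacitive touch states might map to poses – these rules are our own, not the Oculus SDK’s actual detection logic, which lives in native code – consider:

```javascript
// Rough illustration only; not the Oculus SDK's gesture detection.
// 'touches' holds booleans from the capacitive sensors: whether skin is
// resting on the thumbstick, the face-pad, and the trigger respectively.
function classifyPose(touches, triggerPulled) {
  const thumbRaised = !touches.thumbstick && !touches.facePad;
  const indexPointing = !touches.trigger && !triggerPulled;

  if (touches.facePad && triggerPulled) return 'pinch'; // the new 'OK'-style pose
  if (thumbRaised && indexPointing) return 'finger-gun';
  if (thumbRaised) return 'thumbs-up';
  if (indexPointing) return 'pointing';
  return 'neutral';
}

// Thumb resting on the face-pad while the trigger is pulled reads as a pinch:
console.log(classifyPose({ thumbstick: false, facePad: true, trigger: true }, true));
// -> 'pinch'
```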
And finally, there was a glimpse of what’s likely the final box that Touch will ship in, complete with the extra sensor included at the $199 price.