Anvio VR is a new, motion-capture-tracked VR experience that throws you and a bunch of friends into a warehouse-scale VR arena to battle, well, pretty much everything. It looks great!
In many ways, the out-of-home VR industry has done a much better job than home systems of fulfilling the early potential and expectations of what virtual reality as a technology can offer. With no restrictions on hardware customisation and the freedom to eschew the out-of-the-box tracking technologies of current generation VR headsets, startups like The Void and VRCade have shown what pure virtual reality attractions can offer right now.
Anvio VR is a new, motion-captured virtual reality platform developed by Vortex LLC in Moscow. It uses “top of the line” professional motion capture systems, retrofitted to off-the-shelf VR hardware (in this case the Oculus Rift) and driven by backpack PCs in a 2,150-square-foot physical space. Players are also equipped with sturdy-looking, motion-captured assault rifles and, as every other player’s position is accurately tracked inside the space, they can move around confident they’re not going to butt heads at any moment (unless of course they choose to).
A press release from the company says:
Anvio VR is designed to provide complete freedom of movement for the player.
In our virtual worlds you can run, jump, wave and much more, all together with your friends. Our large play area and fully wireless system mean you don’t have to worry about cords or running out of space, creating an incredible level of immersion.
A single arena is able to host different game content which can be switched on the fly.
Anvio VR opened its doors to paying customers a couple of months ago at its first venue in Moscow, and the company claims to have served some 2,000 customers since then. It’s also keen to expand beyond Russian borders, with the website hinting at the prospect of a venue in London.
Anvio VR isn’t technically something new (we already mentioned two of Anvio’s competitors), but I have to confess that the no-nonsense approach to gameplay on display in the videos throughout this article, and the sheer fun people were having, was a little infectious.
USC’s Institute for Creative Technologies works across graphics, learning sciences, medical VR, mixed reality, and much more, and the institute has played a major part in the development of many new technologies that move from research labs into widespread use.
“This is where Palmer Luckey cut his teeth in our mixed-reality labs before he did his Kickstarter,” says current ICT Research Assistant Professor Ari Shapiro. Luckey went on to co-found one of the breakthrough companies in the VR industry with Oculus and its Rift headset, so it is no surprise that ICT could end up serving as the origins for other pioneering pieces of technology.
Easy 3D Avatars
The graphics lab at ICT captures high-quality digital faces for film with special (and expensive) scanning equipment. Shapiro runs a research group there called the Character Animation Simulation Research Group and one of its goals is to create a digital person that can behave like a real one. There are certainly means to do this, but not in a financially accessible way that produces a high-quality final product.
“Can we generate something high-quality with off the shelf scanners and an automatic process?” asks Shapiro. “When you do that, do you essentially democratize this type of data? What if everyone could have their own avatar?”
Shapiro’s team has done studies to determine the objective value of such a tool, learning that there’s an interest in running a simulation with a version of yourself. Then they determined what elements of a person needed to be reflected in these digital creations.
“What other elements of us need to be embedded? Our personality, our style, our posture, and that sort of thing,” he adds.
The research group started testing with Microsoft’s Kinect four years ago, doing some body scanning and facial scanning. That work produced ways to scan the face and body, attach hands and fingers, and so on, but the major key was that it was all obtained with off-the-shelf components. Not only does removing specialized equipment drop the cost, it also means you don’t need artists or technicians in the loop. Years later, the video above demonstrates a functional prototype with realistic avatars and realistic expressions to boot.
Where Are We Now?
Shapiro says the plan is to commercialize this tool as much as possible, and he sees opportunities for social VR and augmented shopping applications where users can try on different things using a replica of themselves. The software is up and running and they’ve done a few hundred demos, so they’re moving rapidly along the path to making this available, but Shapiro says there’s still more to figure out when it comes to the face.
“We’re making a choice that a lot of people don’t make when they do these facial generation systems,” he says. “Most of the time, they have a working facial rig and they try to adapt it to a scan or a photograph.”
With that style, you end up with something that works and is able to “emote your speech” well but it doesn’t resemble the person as closely as desired. You’re basically trying to “fill in data” where you don’t have it.
“We’re doing the opposite. We’re basically saying that whatever you give us, we’re going to use that to reproduce the person. Ours look real with the limitation that, if you don’t give it particular emotional expressions, your character can’t do it,” he explains.
The attempt to stretch and pull these reproductions in order to exhibit emotions is how you fall into the uncanny valley, where an avatar looks almost human but is off just enough to make viewers uncomfortable. This is something Shapiro and his team hope to avoid. They also want to reach a level of facial scanning quality that allows for teeth and tongue modeling too.
Ultimate Goal
“If you have to use specialized equipment that can only be used in specific places, you might as well go down the traditional pipeline,” Shapiro said. He explains that well-equipped visual effects teams can produce content that looks and possibly functions better than what this rapid-scan technology puts out, but accessibility is the end goal.
“We’re trying to work it into a consumer platform,” Shapiro says. “The overall goal of this project is to create a set of technology that anybody can use to produce their avatar for any means.”
Ari Shapiro also serves as CEO of Embody Digital, a company specializing in technology for the “digital you”, and they’re already working on ways to commercialize the technology and make it available to consumers. It seems like only a matter of time before you’re scanning yourself into the virtual experience of your choosing.
Whether by haptic claws or well-designed VR controllers like the Oculus Touch, there’s a mission to bring an intricate level of control to VR involving natural interaction with our hands. Since we were kids, we’ve emulated different devices with our hands, like making a gun for cops and robbers with index finger and thumb extended from a fist. The CaptoGlove, a wearable piece of tech fully funded on Kickstarter, is taking that idea and bringing gesture-based gaming and computing to life.
The glove itself includes multiple sensors and connects to other hardware wirelessly via Bluetooth 4.0. With it, players are able to swing, aim, and fire weapons, pilot vehicles, and more with intuitive gestures. The glove also has capacitive ends for the index finger and thumb so you can manipulate touch screens without having to remove it. On top of all that, it is said to offer up to 10 hours of continuous play time. There are already quite a few videos meant to show off the glove in action, most of them involving first-person shooters. There are a few other intriguing demos as well, like using the glove in a handlebar fashion to steer a speeder bike in Star Wars Battlefront or swiping through menus on your cell phone with a swipe gesture of the glove-equipped hand.
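To give a rough sense of how gesture bindings like this typically work, here is a minimal Python sketch that maps normalized finger-flex readings to game actions. The data format, thresholds, and function names are hypothetical stand-ins for illustration, not CaptoGlove’s actual SDK or protocol.

```python
# Hypothetical sketch of mapping glove finger-flex readings to game actions.
# Values, thresholds, and data format are illustrative only -- they do not
# reflect CaptoGlove's real SDK.

FIRE_THRESHOLD = 0.7   # index finger curled past 70% of its range
GRIP_THRESHOLD = 0.6   # other fingers curled into a fist

def interpret_gesture(flex):
    """Map normalized flex values (0.0 = straight, 1.0 = fully bent),
    ordered index, middle, ring, pinky, thumb, to a simple action."""
    index, middle, ring, pinky, thumb = flex
    if index > FIRE_THRESHOLD and middle > GRIP_THRESHOLD:
        return "fire"        # closed fist with trigger finger: shoot
    if index < 0.2 and middle > GRIP_THRESHOLD and ring > GRIP_THRESHOLD:
        return "aim"         # 'finger gun' pose: raise the weapon sights
    if all(f < 0.2 for f in flex):
        return "swipe_mode"  # open palm: menu swiping
    return "idle"

# One frame of sensor data as it might arrive over Bluetooth.
print(interpret_gesture([0.85, 0.80, 0.75, 0.70, 0.60]))  # -> "fire"
```

A real driver would then translate actions like these into key presses or controller inputs for whichever game is running, which is roughly what the glove’s preset control profiles are for.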
As designed, the CaptoGlove is going to work with old and future games out of the box and include multiple preset control options while remaining fully customizable. It will also work with every VR headset on the market, which is where the device will likely be most impactful. The Vive and Rift have solid controllers, but the CaptoGlove would be a welcome wearable that boosts the immersion of the growing number of virtual experiences.
The CaptoGlove still has more than two weeks left in its funding campaign at the time of this writing, but it has already reached its $50,000 goal. The glove is expected to retail for $250, but is promised for $160 via a Kickstarter early bird special.
A new video produced by IKinema showcases Orion, their new full-body animation system that uses HTC Vive tracking hardware. Their expertise in inverse kinematics results in convincing, affordable motion capture using a small number of tracking points.
The Vive Tracker’s two main uses are to attach to physical objects (such as a peripheral or camera) that can be tracked separately in VR, and to attach to the body, in order to enhance body tracking beyond the head and hands. By attaching Trackers in the most effective places, combined with inverse kinematics, a form of full-body motion capture can be generated; HTC themselves recently released code to illustrate this technique.
In February, an earlier version of IKinema’s ‘Project Orion’ was shown using extra Vive motion controllers strapped to the feet and waist. With the launch of the dedicated Vive Trackers, the technology has been polished into ‘Orion’, middleware available this quarter that produces convincing motion capture. The latest video (heading this article) shows very impressive results from the raw data capture with no post-processing.
A standard Orion licence costs $500 per year. Requiring only a single PC, an HTC Vive and three Vive Trackers, the setup makes the entry price for this quality of motion capture very low compared to more traditional methods, while also being more convenient: it’s achievable in smaller, non-dedicated spaces and without the need for cumbersome tracking suits.
IKinema, a UK-based provider of high-end middleware and technology since 2006, says that Orion’s solved skeleton output is suitable for game engines, 3D packages, custom rendering environments, and VR/AR experiences. It sees the technology as very useful for game developers prototyping animation, as well as for the simulation, enterprise, retail, medical and automotive industries. In its Orion factsheet, IKinema also suggests the real-time nature should prove “particularly useful to those conducting mixed-reality live shows, staged events, promotional on-site VR experiences and VR theatre.”
Lower-priced cameras from long-time motion tracking company Optitrack could slash as much as 40 percent off the cost to track VR headsets and accessories over very large areas. The price cut could accelerate the roll-out of out-of-home VR experiences like The Void.
The Void covers very large regions with Optitrack cameras overhead to find the locations of people, controllers or other objects that are part of the overall story. In The Void’s first public installation in New York, Madame Tussauds offers a Ghostbusters experience that makes visitors feel like they are really catching ghosts throughout a building. Immersion can be dialed up on these “stages” by enhancing the experience with wind, heat or scent effects that tie to the story. Ghostbusters is a particularly smart fit for The Void because you wear a backpack powering the wireless headset that ends up feeling exactly like a proton pack.
When we got a look at the refined Rapture hardware from The Void, co-founder James Jensen noted the controller and headset are no longer covered with external tracking markers.
Typically, Optitrack covers objects or people with light-colored reflective balls or dots to track their movements. It turns out The Void is one of the very first systems equipped with Optitrack’s latest “active” system, which uses lights embedded in objects rather than easy-to-break balls. The Void is also now employing a significant upgrade to the visuals seen inside its Rapture VR helmet, and the startup aims to open 20 of its hyper-immersive “stages” this year.
While Valve Software is working on improved base stations for its innovative lighthouse tracking system used by the HTC Vive, we haven’t heard a definitive answer one way or the other about whether the technology might one day be extensible to cover very large regions. Today, a large-scale virtual world like those made by The Void turns to a camera-based tracking technology like Optitrack. IMAX VR, in contrast, equipped room-sized pods with Vive tracking base stations for its VR arcade initiative.
“In 2015, the number of out-of-home VR tracking experiences that we sold into, it was a couple dozen systems,” said Optitrack Chief Strategy Officer Brian Nilles. “In 2016, we probably sold 400 to 500 systems in VR tracking. Some of them are research, some of them are R&D for universities, but a lot of them are out-of-home experiences that are in Asia, Europe and growing in North America as well. So in 2017, it seems like the market is getting traction.”
The Void is just one among a field of companies looking to establish a market for a new kind of destination entertainment mixing elements of storytelling and exploration with paintball or laser tag. A price drop like Optitrack’s with cameras tuned specifically for VR usage could be precisely the boost needed to make these types of locations more common.
From an Optitrack press release:
At the core of OptiTrack Active is a set of infra-red LEDs synchronized with OptiTrack’s low latency, high frame rate, Slim 13E cameras, delivering real time marker identification as well as positioning. This differs from OptiTrack’s passive solution, which requires that reflective markers be configured in unique patterns for each tracked object. This can add a great deal of complexity for high volume manufacturing and large-scale deployments of HMDs or weapons. With OptiTrack Active over 100 objects can be tracked simultaneously over areas greater than 100’x100’ (30mx30m)…
The newer Slim 13E cameras are priced around $1,500, while the equivalent hardware using the older “passive” dot-tracking system cost around $2,500. Covering large regions can require dozens of these cameras, so the cost adds up very quickly. The image below, provided by Optitrack, imagines an enormous space with cameras placed overhead evenly throughout.
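For a rough sense of scale (our back-of-the-envelope arithmetic, not Optitrack’s): a 40-camera installation would come to about $60,000 with the new Slim 13E cameras versus roughly $100,000 with the older passive hardware, which is where a saving of around 40 percent comes from.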
Back in August of last year, Valve started to roll out its innovative, royalty-free tracking technology. The company made a development kit available to licensees, but only if they attended a $3,000 training session that taught the ins and outs of the tech. The introductory course was likely a bit of quality control, but the price of the session was also a daunting obstacle for some. This is no longer a concern, as Valve is removing the course requirement, making the highly regarded tracking technology more readily available.
Valve has over 500 companies signed up currently, though that number is sure to change a great deal in response to this new development. The original in-person training course will still be offered, but the coursework (in English or Chinese) will now be available for free.
On top of all this, the SteamVR base stations that emit lasers to track sensors throughout the room will be available directly from Valve later this year.
The tech itself opens up a plethora of opportunities for enhancing the immersion of VR. SteamVR Tracking is a system that works with low-weight sensors that can be placed on various objects so they can be brought into virtual spaces. For example, players could be handed realistic props for baseball, ping pong, or even shooters and they’d be tracked accurately in whatever experiences were built around them.
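As a simplified illustration of what bringing a tracked prop into a virtual space involves, here is a short Python sketch that takes a tracked sensor’s pose and a fixed mounting offset and computes where the business end of the prop sits in the world. The pose values and offset below are invented for illustration; a real integration would read live poses from the SteamVR runtime rather than hard-coding them.

```python
# Minimal sketch: turning a tracked sensor's pose into a prop's in-game position.
# Pose values and the sensor-to-tip offset are invented for illustration; a real
# application would query live poses from the SteamVR runtime each frame.
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Pose of a sensor bolted to a ping-pong paddle (world space, metres).
sensor_pos  = np.array([0.10, 1.20, -0.30])
sensor_quat = np.array([0.924, 0.0, 0.383, 0.0])   # roughly 45 degrees of yaw

# Fixed offset from the sensor mount to the paddle face, measured once
# in the sensor's local coordinate frame.
local_offset = np.array([0.0, 0.0, 0.15])

# World-space position of the paddle face, recomputed every tracking frame.
paddle_face = sensor_pos + quat_rotate(sensor_quat, local_offset)
print(paddle_face)
```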
At the beginning of the year, we addressed the idea of SteamVR Tracking potentially being 2017’s most important VR technology, and it is very encouraging to see it made available in such a way. As it makes its way into the hands of more creatives and engineers, we’ll hopefully be able to find out if a more immersive hardware and accessory ecosystem will bring VR into more homes.
Motion capture specialist IKinema has demonstrated its new in-development, low-cost motion capture system, which uses six points of SteamVR tracking to deliver a pretty accurate recreation of real-world motion.
The potential diversity of uses for Valve’s SteamVR tracking system, Lighthouse, is something we’ve pondered before, especially with the company recently opening up licensing of the technology for use in potentially any device. Now a company has used the laser-based tracking system as the core of its new low-cost motion capture solution.
This is Project Orion from UK-based motion capture specialists IKinema. The in-development solution uses just six points of SteamVR tracking and has the subject strap HTC Vive controllers to themselves. The setup demonstrated has one SteamVR controller per foot, one at the base of the back, as well as (somewhat more conventionally) two handheld units, with the sixth unit strapped to the head. All of this is tracked with the standard two Lighthouse laser base stations.
As noted in the video above, IKinema says it’s achieved these impressive levels of accuracy with no post-production – what you’re seeing is captured and rendered in real time. What’s more, it’s not as if the subject of the film is going particularly easy on the capture system, with sideways rolls and even a couple of chimpanzee impressions thrown into the presentation. Project Orion looks to be using inverse kinematics (systems which infer realistic motion from skeletal structure) to ‘fill in’ the blanks between tracking points, and considering how many blanks there are compared to a more traditional, industry-focused mocap setup, Orion does remarkably well.
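To make the “filling in the blanks” idea concrete, here is a toy two-bone IK solver in Python that recovers a plausible 2D elbow position from nothing more than a fixed shoulder, a tracked hand position and known bone lengths. It is a deliberately simplified stand-in for the general technique, not IKinema’s actual solver.

```python
# Toy 2D two-bone inverse kinematics: infer an untracked elbow from a fixed
# shoulder, a tracked hand and known bone lengths. A simplified stand-in for
# the general idea -- not IKinema's actual solver.
import math

def solve_elbow(shoulder, hand, upper_len, fore_len):
    dx, dy = hand[0] - shoulder[0], hand[1] - shoulder[1]
    dist = min(math.hypot(dx, dy), upper_len + fore_len - 1e-6)
    # Law of cosines: angle at the shoulder between the upper arm and the
    # shoulder-to-hand line.
    cos_a = (upper_len**2 + dist**2 - fore_len**2) / (2 * upper_len * dist)
    a = math.acos(max(-1.0, min(1.0, cos_a)))
    base = math.atan2(dy, dx)
    return (shoulder[0] + upper_len * math.cos(base + a),
            shoulder[1] + upper_len * math.sin(base + a))

# Shoulder pinned at the origin, hand tracked 40 cm away, 30 cm bones.
print(solve_elbow((0.0, 0.0), (0.4, 0.1), 0.3, 0.3))
```

A production solver works in three dimensions, over a full skeleton and with joint-limit and smoothing constraints, but this is the same basic kind of gap-filling between tracked points.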
We’ll be interested to see where IKinema takes Project Orion in the future, and whether, for example, it’ll become another licensee of Valve’s open Lighthouse tracking initiative with a hardware solution of its own.
Magic Leap is a mystery. Parsing through their pre-factual, post-cool ad campaign leaves you with the impression that the brains at the company have created the next be-all and end-all of augmented reality devices thanks to an ingenious light field display technology; and as a curious onlooker, getting a peek at the $4.5 billion startup’s tech will cost you exactly one non-disclosure agreement. OK. Probably several. Now founder and CEO Rony Abovitz offers a bit more in his recent blog post entitled Creativity Matters, where he speaks about some of the changes coming to the company in 2017 and, for what it’s worth, says it’ll be “a big year for Magic Leap.”
In Abovitz’s last update in late December, he reported the successful conclusion of the company’s first PEQ (Product Equivalent) run, built to their target form-factor, and said a bigger PEQ run will follow in 2017, which is said to “exercise [Magic Leap’s] supply chain and manufacturing/quality operations.”
Current units built by Magic Leap “are for engineering and manufacturing verification/validation testing, early reliability/quality testing, production line speed, and a bunch of other important parameters,” he says.
If you’re partial to artistic flair, Abovitz is happy to oblige in his latest blog post. I’ve condensed it down some here:
“Our first product is coming,” writes Abovitz. “My office in our new building is right next to a small model home we built, right smack in the middle of everything. A home where we can test how Magic Leap will fit into your life each day.”
Rony Abovitz, CEO of Magic Leap | Photo courtesy Magic Leap
“2017 will be a big year for Magic Leap. Enjoy the ride with us – it will be fun. Magic Leap is for the dreamer, the artist, and the wide-eyed kid within us all. But what we are building is no longer just a dream. It is very real, and we are way past the “is it possible stage”. We are not about building cool prototypes. We are scaling up so we can manufacture hundreds of thousands of systems, and then millions. That requires a level of perfection, testing, and attention to detail by determined professionals. We have made something that is small, mobile, powerful, and we think pretty cool.”
“Our photonics may be powered by a novel array of unique nano-structures designed by our otherworldly optics team. Our sensors and computing pack a lot of punch in a small package. But the experience you should have must feel as if it were powered by unicorns and rainbows (and we have had many of those here).”
Magic Leap has only shown video captures of its technology, the most recent of which announced a partnership with Lucasfilm’s ILMxLab to create an AR experience.
In digital experiences, hand-crafting figures and different forms of movement is a chore. While it can sometimes supply a unique experience, elements of that type of work lend themselves to a lack of realism in performance. Motion capture finds its roots in the art of rotoscoping, a technique developed in 1915 in which animators trace over motion picture footage. Motion capture takes that to the next level by using suits and other tools to record the movement of subjects and apply it to 3D models in various forms of media. Motion capture gear fluctuates in quality and price, very rarely hitting that sweet spot of quality and affordability, but Rokoko has developed a suit that impresses on both levels. We chatted with Rokoko founder and CEO Jakob Balslev about the new Smartsuit Pro.
Rokoko was founded in March 2014, and since then the company has charged forward with a goal of making a suit that animators themselves can operate without help. In typical images of motion capture gear you see exposed and usually large sensors, but this suit has 19 hidden body sensors, each equipped with a gyroscope, accelerometer, and compass, embedded into the suit alongside a small battery and hub on its rear (both smaller than an iPhone). When breaking down how this suit differs from optical mocap, Balslev said it comes down to three keywords: intuitive, accessible, and mobile.
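To illustrate how readings from a gyroscope, accelerometer and compass can be combined into a stable orientation estimate, here is a toy one-axis complementary filter in Python. It is a generic textbook technique shown purely for illustration, not Rokoko’s actual sensor-fusion pipeline.

```python
# Toy complementary filter: fuse a gyroscope (fast but drifts) with an
# accelerometer (noisy but drift-free) into a stable tilt estimate.
# A generic textbook technique, shown only to illustrate the idea of
# inertial sensor fusion -- not Rokoko's actual pipeline.
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """samples: list of (gyro_rate_deg_per_s, accel_x_g, accel_z_g) tuples."""
    angle = 0.0
    for gyro_rate, ax, az in samples:
        gyro_angle = angle + gyro_rate * dt              # integrate angular rate
        accel_angle = math.degrees(math.atan2(ax, az))   # tilt implied by gravity
        angle = alpha * gyro_angle + (1 - alpha) * accel_angle
    return angle

# Five seconds of a sensor held steady at ~10 degrees of tilt, with a small
# gyro bias that would accumulate into drift if the gyro were used alone.
steady = [(0.5, math.sin(math.radians(10)), math.cos(math.radians(10)))] * 500
print(complementary_filter(steady))   # settles near 10 degrees despite the bias
```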
Optical motion capture systems (which today cover the vast majority of the market) have problems with occlusion. Cameras need to be able to see the reflective markers at all times – if not, you lose data – and that happens a lot when, for example, two characters have a close interaction. The sensors in the Smartsuit send a constant stream of data, so you never lose data and have much less “mocap cleanup” to do afterwards.
Optical systems also produce a very large amount of data (many gigabytes). The data from the Smartsuit takes up much less space, since it is only numbers and not video files. This makes the workflow much smoother and lighter.
You can record or stream up to five characters at the same time with just one Wi-Fi router in Unity or Smartsuit Studio. With a second router you can have even more characters.
With a tool such as the Smartsuit Pro coming in at an accessible price, it could lower costs for established production companies while providing an affordable tool for newer studios, with teams able to create high-quality content for their projects and clients.
We had the chance to see the Rokoko suit in action at our offices in San Francisco and it appeared to function as advertised. The lightweight suit went on a Rokoko staffer in seconds and was ready to record in minutes. The proprietary software asked the user to hold a “T-pose” for three seconds for calibration purposes, but after that there was essentially no friction between the model’s movements and the movements on the computer.
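For a sense of why a brief T-pose is enough, here is a deliberately simplified Python sketch: while the pose is held, each sensor’s reading is compared against the known orientation of its bone in a T-pose, and the difference becomes that sensor’s mounting offset, applied to every reading afterwards. The bone names, angle convention and values are hypothetical; this is not Rokoko’s calibration routine.

```python
# Simplified sketch of T-pose calibration: derive a per-sensor offset from
# readings captured while the wearer holds a T-pose, then apply it to live
# data. One angle per bone for brevity; hypothetical, not Rokoko's routine.

# Known bone orientations in a T-pose (degrees, invented convention).
T_POSE_REFERENCE = {"upper_arm_l": 90.0, "forearm_l": 90.0, "spine": 0.0}

def calibrate(tpose_readings):
    """Per-sensor offsets from readings captured during the held T-pose."""
    return {bone: T_POSE_REFERENCE[bone] - measured
            for bone, measured in tpose_readings.items()}

def apply_calibration(live_reading, offsets):
    """Correct a live sensor reading using the stored mounting offsets."""
    return {bone: angle + offsets[bone] for bone, angle in live_reading.items()}

offsets = calibrate({"upper_arm_l": 84.0, "forearm_l": 95.5, "spine": 2.0})
live = {"upper_arm_l": 30.0, "forearm_l": 40.0, "spine": 1.0}
print(apply_calibration(live, offsets))   # readings nudged by each offset
```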
The system allowed Rokoko to embody several different characters in several different environments and animate them realistically in just a few moments. Some other systems might take quite a bit longer to complete a single scene.
The suit itself is all one piece that zips up easily and has adjustable straps to fit different body types. It is made of a lightweight mesh material that moves silently, out of consideration for the sensitive microphones often present on motion capture stages.
The entire system is available for pre-order today, listed at around $2,245 (discount included), and is being targeted primarily at high-end studio production. Balslev did confirm, however, that the company is also working on a much less costly consumer model that would consist of a simple jacket and wireless foot sensors.
“Creators should think of this as a creative tool they can integrate on levels where motion capture has never been accessible before,” Balslev says. “Animators can stand up right at their desk and do a recording. Sit down again and test it. As a pre-visualization tool, this suit can change the entire workflow.”
Balslev says creators can rest easy not having to go through a time-consuming and expensive effort to translate their ideas.
“For VR, having affordable tools is crucial as there’s a lot of financial risk developing for an industry that doesn’t have a massive install base in place just yet,” Balslev says. “Hopefully, we’ll see similar balances of quality and affordability prioritized in new tools created for VR down the line.”
–
Disclaimer: Rokoko is a paying member of the Upload Collective, Upload, Inc.’s co-working office in San Francisco. This story was written purely on the strength of its merit as a newsworthy story for the VR community. Rokoko provided no monetary incentive for this story to be written.
Last week Oculus Chief Scientist Michael Abrash stood on the stage at Oculus Connect 3 and talked about where he thinks VR will be in five years’ time. He made some bold predictions that are going to take a lot of work and resources to achieve.
That’s why Oculus Research is launching a $250,000 grant initiative looking to advance work in a few key areas. This isn’t like the company’s large investments in content. This money will be split between a maximum of three research proposals based in vision and cognitive science. Research will be carried out over the next one to two years, and submissions should come from academic institutions.
Oculus is looking to make progress in very specific fields with this money, and the findings from successful applicants will be released to the public. The company has outlined what it’s hoping to find in a Call for Research.
The first area the company is looking at is ‘Self-motion in VR’. That doesn’t mean new locomotion techniques, but instead the ways that information sources like a wider field of view affect users’ behavior in “three-dimensional scenes”. “More specifically,” the call notes, “we are interested in how these cues to depth may change the way the visual system uses other sources of shape information…to recover the three-dimensional layout of the virtual or augmented scene”.
Oculus is also looking for a team to develop a way to generate a ‘dataset of binocular eye movements’ within the real world. You might remember Abrash speaking about the complexity of delivering perfect eye-tracking in his talk last week, and this might be related. “While eye movements generated in laboratory settings are well studied,” the call reads, “much less data is available about eye movements in the natural world or in virtual reality.”
‘Multisensory studies’ is next. Oculus wants to understand why VR and AR experiences that cover multiple senses are so much more compelling than those that address a single one. “We would like to determine what features and characteristics make multisensory information so valuable in AR/VR,” the call notes.
Finally, Oculus is interested in “biological motion related to social signaling”. Again, this relates to another part of Abrash’s talk, this time concerning virtual humans. The company wants to establish the gestures, facial expressions, eye movements and other factors that are essential to communicating our intended messages beyond mere words. With a clear understanding of this, we could see more life-like avatars.
Submissions need to be emailed to callforresearch@oculus.com and will be a maximum of five pages in length, outlining methods, budget, and estimated timelines. Reviews for proposals will begin on October 25th and successful applicants will be contacted on November 1st.