‘Sansar’ Will Open to All in First Half of 2017 with a New Approach to Virtual Worlds

Sansar, a virtual world platform from the creators of Second Life, will open its doors to everyone in the first half of 2017. Developer Linden Lab explains their new approach to virtual worlds, and the many ways they plan to enable creators to make money with compelling virtual experiences.

Having launched Second Life more than 10 years ago—a virtual world that in 2016 alone had a GDP of some $60 million—Linden Lab has deep expertise in virtual worlds that goes back long before the recent rebirth of consumer virtual reality hardware. And now the company is building a new virtual world platform, Sansar, to serve the next generation. But this is no Second Life 2.

Sansar takes a fundamentally different approach than Second Life. While the Second Life model was about running a persistent virtual world that existed in one massive virtual space, Sansar’s aim is to be more of a platform than a singular virtual world. I recently spoke with Linden Lab CEO Ebbe Altberg at the company’s offices in San Francisco where Altberg told me about Sansar’s new approach.

“Between the Creator and the Consumer, Second Life never really settled on which was our primary customer,” Altberg said.

With Sansar, Linden Lab’s focus is firmly on the Creator. The company wants to make it easy for creators to make discrete virtual worlds and experiences. Linden Lab envisions its success as helping creators easily build and monetize virtual experiences (and taking a cut from successful creators). Virtual experiences built on Sansar will be self-contained spaces that users jump between, rather than one big virtual world that could be traversed continuously from end to end. Instead of ‘traveling’ from one place to another, like in Second Life, you’ll just hop in and out of experiences at will, like jumping from one webpage to another with links.

Overall, Sansar aims to be more like an app store or a platform (like WordPress, Altberg said) than one big virtual world. That solves a number of key problems that the company has identified with Second Life’s model, Altberg told me. One of the biggest is ‘discovery’. Second Life is (and is marketed like) one big virtual world. There’s an incredible variety of things to do in Second Life that appeal to different people who want different things. But because it’s all contained inside one giant virtual world, it’s challenging and inefficient to market the entire world to those who might just want one particular thing from it.

Imagine if YouTube tried to get you to use their platform by saying “we have 80 million videos! Come see them!”… that’s neat, but lacks a certain appeal because I’m only going to watch a fraction of those videos, and who knows if I’d ever want to watch any of them? But, if a friend links me to one funny or interesting video, that’s much more likely to get me onto YouTube.

Sansar is built and structured so that the individual experiences are like (using our metaphor above) individual videos on YouTube. They are entry points which the creators themselves can market to a specific audience.

Linden Lab plans to give Sansar creators a number of options to monetize their content. For one, consumers will be able to buy 3D models to customize their own virtual spaces and avatars. It’ll be possible for creators to charge entry fees to particular experiences. There’s also expected to be options for membership fees to access certain places. And the company is brainstorming more monetization options still, like the ability for consumers to pay money to a virtual object which would hold the money and pay it out to its owner at regular intervals. That could open the door to functional objects that execute their function for a fee (in the real world you can imagine objects like that—arcade machines, pool tables, washing machines, vending machines).

Linden Lab began inviting the first creators in 2015. As of 2017, the company has granted access to 1,000 creators, and says some 10,000 have signed up requesting access. So far the public is not allowed inside, and the company is keeping a tight lid on what it actually looks like inside.

At the company’s offices this week, Altberg took me on a guided tour inside of Sansar. We each wore a VR headset (for which Sansar is built from the ground up) and VR controllers, in separate rooms. He took me to a number of virtual worlds made by third-party creators, and though I couldn’t capture what I saw, I can tell you about it.

The graphics are actually quite good; dated visuals were a major criticism of Second Life, especially as it aged. Altberg, though, says that Sansar is architected in a number of ways that will let its graphical capabilities scale more easily over time, whereas Second Life had design hurdles which prevented it from doing so. That’s important for a virtual world platform which hopes to be used by millions long into the future.

The first place I saw was a movie theater. A massive screen sat in a vast outdoor expanse with the night sky overhead. The seats in front of the screen were mostly covered over in windswept sand; as if there was once a huge theater that had deteriorated long ago, save for the screen, seats, and a huge flight of stairs leading down to them. The screen itself really felt massive (I’ve seen a number of other movie-theater VR experiences that for some reason didn’t give a good sensation of scale). The screen was streaming a video from YouTube and the audio was playing throughout the entire space. Altberg said creators will soon be able to set virtual sound sources in Sansar so that the theater could have virtual speakers from which the sound originated.

Next was an Egyptian tomb which Altberg said was a real space that had been captured with photogrammetry. As we explored the tomb’s hieroglyphic-covered corridors together it became apparent that Sansar has 3D positional audio built in, allowing me to easily tell where Altberg was even when I wasn’t looking at him. That’s important not only because it helps your mind map the space and people around you more easily (which adds to immersion), but also because in multi-user scenarios, it’ll be much easier to tell who’s talking (which is also helped by automatic lip syncing).
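
For a sense of how positional audio like this works under the hood, here’s a minimal C++ sketch of the general idea—simple distance attenuation plus equal-power stereo panning. This is an illustration of the standard technique, not Sansar’s actual audio engine; all names and constants are assumptions.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Compute per-ear gains for a mono source at `src`, heard by a listener at
// `pos` facing along the unit vector `forward` (on the ground plane).
void positionalGains(const Vec3& src, const Vec3& pos, const Vec3& forward,
                     float& leftGain, float& rightGain) {
    Vec3 d{src.x - pos.x, src.y - pos.y, src.z - pos.z};
    float dist  = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    float atten = 1.0f / (1.0f + dist);               // simple distance falloff

    // The listener's 'right' vector on the ground plane; projecting the
    // source direction onto it gives a pan value in [-1, 1].
    Vec3 right{forward.z, 0.0f, -forward.x};
    float pan = dist > 0.0f ? (d.x * right.x + d.z * right.z) / dist : 0.0f;

    // Equal-power pan law: full left at pan = -1, full right at pan = +1.
    float theta = (pan + 1.0f) * 0.25f * 3.14159265f;
    leftGain  = atten * std::cos(theta);
    rightGain = atten * std::sin(theta);
}
```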

The next space we visited was a beautiful world that looked like a mashup between Ocarina of Time (1998) and Peter Jackson’s Lord of the Rings aesthetic. It was a bright and cheery village full of green foliage and earthen homes built into the sides of hills; a series of small foot bridges arched across the roofs of one home to the next. The space was very dimensional, with little paths winding up hills here and there, taking us to comfortable nooks enclosed with trees. The space had a definite stylized videogame look to it, but even though it wasn’t aiming for realistic visuals, it was probably the most charming and beautiful place I saw during my tour. In the center of town we came across a big monument of a cutlass that was sticking tip-down into the ground. Water cascaded down from the handle in ordered lines, and poured into pools at the base of the monument. Although the entirety of this virtual space was uninhabited at this stage, it called out to be the starting point of a great adventure.

You’ll note from the descriptions that all three creations I saw varied widely from one to the next, both visually and functionally. Altberg said that Sansar aims to be “style agnostic,” leaving the creator largely unconstrained in terms of look and feel.

From my glimpse of Sansar, I’m very excited to see what creators will build on the Sansar platform. I know that the videogame space I described will be even better once it’s inhabited with other virtual users, and once it has some function beyond just being a charming virtual space. And that’s just one world among many that will be built atop Sansar.

Interested creators can sign up to get early access to Sansar today, and the rest of us will be able to step into these virtual worlds (and even try making our own) when the platform goes public in the first half of 2017.


Get Free Courtside VR Seats to Spurs vs. Timberwolves NBA Game This Tuesday

Own Gear VR or Daydream View? Then you’ve got access to free courtside seats to this Tuesday’s Timberwolves vs. Spurs NBA game, live… in VR of course.

This Tuesday, January 17th, NextVR is offering a free pass to watch the San Antonio Spurs, currently holding the #2 position in the NBA Western Conference, tip off against the Minnesota Timberwolves.

Photo courtesy NBA

NextVR specializes in virtual reality live broadcasting, and has worked with major sporting organizations like the NBA, NFL, NHL, NASCAR, and more to put you in the midst of the action right from the comfort of your home. This season, the company agreed to broadcast some 20 live NBA games in VR, straight to the NextVR app, which is available on Gear VR and Google’s Daydream View headset. And while you’d typically need to drop $170 on the NBA League Pass to see the action live, the Spurs vs. Timberwolves game this Tuesday will be accessible for free.

Tune in to the NextVR app on your Gear VR or Daydream View headset on January 17th at 8:30PM ET to watch courtside, live in VR. Drop us a comment below to let us know who you’re rooting for.



Eonite Claims to Have Solved Inside-out Positional Tracking for VR and AR

Lots of companies have actually made this claim, but Eonite specifically says they have the “world’s most accurate, lowest latency, lowest power consuming software to democratize inside-out positional tracking for VR and AR.”

Inside-out positional tracking—the ability to precisely determine where an object is in space using only sensors mounted on the device itself—has been an obvious need but an elusive challenge for the VR and AR industries. AR in particular requires such accurate and low latency tracking that virtual objects can appear fixed to the real world.

We’ve seen very impressive inside-out positional tracking before on Microsoft’s HoloLens—a $3,000 full-blown Windows PC on your head—which houses a bevy of sensors. But today Eonite is announcing their inside-out tracking solution, and claims that it supports low-latency, high-accuracy head tracking using just commodity depth sensors and tablet-class processing power.

That’s potentially revolutionary for the VR and AR industry if true. And while we haven’t gotten our hands on the tech just yet, Eonite has attracted the attention of Silicon Valley venture capitalists and angel investors who dropped $5.25 million on the company in a seed investment in 2016. Among the investors are Presence Capital and The VR Fund, which specialize in VR & AR tech investing.

The company’s tech is not hardware, but general-purpose software for achieving high-performance inside-out tracking. “It’s not a future promise. It works,” Eonite CEO Youssri Helmy said, speaking with Road to VR. He says Eonite’s tracking is capable of sub-millimeter accuracy and just 15 milliseconds of motion-to-photon latency; both within the thresholds of what’s considered performant enough for VR tracking.

The company is calling the capabilities of the tracking ‘homescale’, to suggest that it can enable tracking across a multi-room, home-sized space, and is tuned to track well given the sort of objects you might find in a common home (furniture, shelves, doors, thin objects, etc.). Helmy says that the tracking tech integrates IMU and RGB data, and can work with “any depth sensing, from high def stereo, time-of-flight, rolling shutter, global shutter. Anything. The software doesn’t have much to do with the camera.” The software is also said to support both static and dynamic real-time obstacle detection for avoiding things like walls and pets.
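
Eonite’s actual sensor-fusion method is proprietary, but the general idea of combining a fast, drifting IMU stream with slower, drift-free camera estimates can be sketched with a simple complementary filter. The names, rates, and blend factor below are assumptions for illustration only:

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

struct FusedTracker {
    Vec3 position{0.0f, 0.0f, 0.0f};
    Vec3 velocity{0.0f, 0.0f, 0.0f};

    // IMU samples arrive at high rate (e.g. ~1000 Hz): integrating them
    // keeps latency low between camera frames, but drifts over time.
    void onImuSample(const Vec3& accel, float dt) {
        for (int i = 0; i < 3; ++i) {
            velocity[i] += accel[i] * dt;
            position[i] += velocity[i] * dt;
        }
    }

    // Vision poses arrive slower (e.g. 30-60 Hz) but don't drift: blending
    // them in corrects the accumulated IMU error.
    void onVisionPose(const Vec3& visionPos, float blend = 0.2f) {
        for (int i = 0; i < 3; ++i)
            position[i] += blend * (visionPos[i] - position[i]);
    }
};
```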

Helmy says the tracking software is built on years of work on artificial perception in robotics and consumer applications by co-founders Dr. Anna Petrovskaya and Peter Varvak. “It’s the same core technology for tracking robots as tracking headsets,” he said. “The tech they had blew me away [when I first saw it].”


But it isn’t just for tracking. Eonite is working on a Unity SDK which the company says will allow developers to bring real-time 3D scanned data from the user’s environment into the virtual world for mixed reality and AR applications, including support for persistent virtual content, shadows, and occlusion.

While the company is primarily pitching the tech for AR and VR tracking for now, it’s also said to be a solution for other industries like automotive, robotics, and manufacturing. The first product using the company’s tracking will launch in the first quarter of this year, Helmy says.

While Eonite’s technology sounds promising, 2016 saw demonstrations of major progress on inside-out tracking by a number of companies. First was the aforementioned HoloLens, followed by the impressive tracking of Qualcomm’s VR reference headset, and by Oculus, which in October showed off highly functional inside-out tracking on the Rift ‘Santa Cruz’ prototype. Inside-out positional tracking is likely to be a dominant theme of AR and VR in 2017, and if truly solved by any of these players, will mark a major next step for the industry.


‘Freedom Locomotion System’ is a Comprehensive Package for VR Movement

Moving around comfortably and immersively in VR remains a hurdle for VR game developers. VR studio Huge Robot has created the Freedom Locomotion System, which brings together a number of VR movement systems into a comprehensive and functional package that allows for comfortable walking, running, and climbing in VR.

Since video games have existed, traversing great distances in large virtual worlds has been part of game design. In games like Halo, players run, drive, and fly across hundreds of virtual miles. But in VR, while driving and flying are usually pretty comfortable, running and walking often aren’t. So many developers have had to experiment with and implement novel locomotion techniques for games which require traversal beyond the player’s available physical space.

There are a bunch of different techniques out there. Many of them are completely comfortable, but not necessarily immersive. The common method of ‘blinking’ from one place to the next makes it hard to maintain a firm mental map of the space around you.


In an effort to tackle the challenge of comfortable and immersive VR locomotion, studio Huge Robot has created the Freedom Locomotion System, a comprehensive locomotion package that Director George Kong boldly believes is “as close to solving the issue of immersive VR locomotion as we can get within the current practical limitations of VR.”

The system is underpinned by what Kong calls CAOTS (Controller Assisted On the Spot) movement. It’s a sort of ‘run-in-place’ movement system of Huge Robot’s own design. Kong says it lets players comfortably and immersively move while leaving their hands free for interactions with the virtual world (especially important for games where you might regularly wield a weapon like a gun or sword).
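
To give a rough sense of how a run-in-place system can move the player while leaving the hands free, here’s a minimal head-bob sketch in C++. This is a generic illustration of the technique, not Huge Robot’s actual CAOTS implementation; all constants and conventions are assumptions.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

struct RunInPlaceLocomotion {
    float baselineHeadY = 0.0f;  // slow-moving estimate of standing head height
    float bobEnergy     = 0.0f;  // smoothed magnitude of the jogging head bob
    bool  initialized   = false;

    // Call once per frame with the head's height (meters) and yaw (radians).
    // Returns this frame's displacement on the ground plane.
    Vec3 update(float headY, float headYaw, float dt) {
        if (!initialized) { baselineHeadY = headY; initialized = true; }

        // Low-pass the head height to separate the fast bob of jogging on
        // the spot from slow posture changes like crouching.
        baselineHeadY += (headY - baselineHeadY) * (2.0f * dt);
        float bob = std::fabs(headY - baselineHeadY);

        // Smooth the bob magnitude so speed ramps rather than pulsing.
        bobEnergy += (bob - bobEnergy) * (5.0f * dt);

        const float kSpeedScale = 60.0f;  // tuning constant (assumption)
        float speed = bobEnergy * kSpeedScale;

        // Move along the head's facing direction (forward-is-+Z convention).
        return { std::sin(headYaw) * speed * dt, 0.0f,
                 std::cos(headYaw) * speed * dt };
    }
};
```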

In addition to CAOTS, the Freedom Locomotion System also includes a number of subsystems which offer different modes of locomotion and methods of smart interaction between the player’s movement and the virtual world.

For instance, with the Freedom Locomotion System, players will move up or down in elevation along slopes and stairs if they walk along them in their physical space (instead of clipping through the geometry). There’s also a climbing system which detects ‘grabbable’ geometry, providing a procedural way of making models climbable for players. There’s also a smart method for dealing with players clipping into walls and over edges. Kong offers a detailed breakdown of the package and its capabilities.

When combined with the CAOTS system, the VR movement provided by the Freedom Locomotion System looks intuitive and immersive. It isn’t clear yet if or how Huge Robot plans to distribute this system as a foundation for VR developers, but Kong says an extensive VR demo will be available soon on Steam and we’re excited to give it a try.


Believe the Hype: HypeVR’s Volumetric Video Capture is a Glimpse at the Future of VR Video

After having teased the tech toward the end of last year, we’ve finally gone hands-on with HypeVR’s volumetric video captures, which let you move around inside of VR videos.

Inherent Limitations of 360 Video

Today’s most immersive VR video productions are shot in 360 degree video and 3D. Properly executed 360 3D video content can look quite good in VR (just take a look at some of the work from Felix & Paul Studios). But—assuming we can one day achieve retina-quality resolution and geometrically perfect stereoscopy—there’s a hurdle that 360 3D video content simply can’t surmount: movement inside of the video experience.

With any 360 video today (3D or otherwise) your view is locked to a single vantage point. Unlike real-time rendered VR games, you can’t walk around inside the video—let alone just lean in your chair and expect the scene to move accordingly. Not only is that less immersive, it’s also less comfortable; we’re all constantly moving our heads slightly even when sitting still, and when the virtual view doesn’t line up with those movements, the world feels less real and less comfortable.

Volumetric VR Video Capture

That’s one of a number of reasons that HypeVR is working on volumetric video capture technology. The idea is to capture not just a series of 360 pictures and string them together (like with traditional 360 cameras), but to capture the volumetric data of the scene for each frame so that when the world is played back, the information is available to enable the user to move inside the video.

At CES 2017, I saw both the original teaser video shot with HypeVR’s monster capture rig, and a brand new, even more vivid experience, created in conjunction with Intel.

With an Oculus Rift headset, I stepped into that new scene: a 30 second loop of a picturesque valley in lush Vietnam. I was standing on a rock on a tiny island in the middle of a lake. Just beyond the rock the island was covered in wild grasses, and a few yards away from me were a grazing water buffalo and a farmer.

Surrounding me in the distance was rainforest foliage and an amazing array of waterfalls cascading down into the lake. Gentle waves rippled through the water and lapped the edge of my little island, pushing some of the wild grass at the water’s edge.

It was vivid and sharp—it felt more immersive than pretty much any 360 3D video I’ve ever seen through a headset, mostly because I was able to move around within the video, with proper parallax, in a roomscale area. It made me feel like I was actually standing there, in Vietnam, not just that my eyes alone had been transported. This is the experience we all want when we imagine VR video, and it’s where the medium needs to head in the future to become truly compelling.

Now, I’ve seen impressive photogrammetry VR experiences before, but photogrammetry requires someone to canvass a scene for hours, capturing it from every conceivable angle and then compiling all the photos together into a model. The results can be tremendous, but there’s no way to capture moving objects because the entire scene can’t be captured fast enough.

HypeVR’s approach is different: their rig sits static in a scene and captures it 60 times per second, using a combination of high-quality video capture and depth-mapping LiDAR. Later, the texture data from the video is fused with the depth data to create 60 volumetric ‘frames’ of the scene per second. That means you’ll be able to see waves moving or cars driving, but still maintain the volumetric data which gives users the ability to move within some portion of the capture.

The ‘frames’ in the case of volumetric video capture are actually real-time rendered 3D models of the scene which play back one after another. That not only allows the viewer to walk around within the space like they would a VR game environment, but is also the reason why HypeVR’s experiences look so sharp and immersive—every frame that’s rendered for the VR headset’s display is done so with optimal sampling of the available data and has geometrically correct 3D at every angle (not just a few 3D sweet spots, as with 360 3D video). This approach also means there are no issues with off-horizon capture (as we too frequently see with 360 camera footage).
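
To make the idea of a volumetric ‘frame’ concrete, here’s a minimal C++ sketch of the core fusion step: back-projecting one depth frame through a pinhole camera model and coloring it with the co-registered video frame. This is a generic illustration of depth/texture fusion, not HypeVR’s actual pipeline; the camera intrinsics and data layout are assumptions.

```cpp
#include <cstdint>
#include <vector>

struct Pixel { uint8_t r, g, b; };
struct Point { float x, y, z; Pixel color; };

struct Intrinsics { float fx, fy, cx, cy; };  // pinhole camera parameters

// Fuse one depth frame (meters, row-major) with its RGB frame into a
// colored point cloud: one volumetric 'frame' of the capture.
std::vector<Point> fuseFrame(const std::vector<float>& depth,
                             const std::vector<Pixel>& rgb,
                             int width, int height, const Intrinsics& K) {
    std::vector<Point> cloud;
    cloud.reserve(depth.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            float z = depth[v * width + u];
            if (z <= 0.0f) continue;  // no depth return at this pixel
            // Pinhole back-projection: pixel (u, v) plus depth -> 3D point.
            float x = (u - K.cx) * z / K.fx;
            float y = (v - K.cy) * z / K.fy;
            cloud.push_back({x, y, z, rgb[v * width + u]});
        }
    }
    return cloud;
}

// Playback then just renders cloud[frameIndex] each display refresh,
// advancing frameIndex at the capture rate (60 Hz in HypeVR's case).
```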



Lumus Packed a 55 Degree Field of View into Optics Less than 2mm Thick

At CES 2017, Lumus is demonstrating its latest waveguide optics which achieve a 55 degree field of view from optics less than 2mm thick, potentially enabling a truly glasses-sized augmented reality headset.

Israel-based Lumus has been working on transparent optics since 2000. The company has developed a unique form of waveguide technology which allows images to be projected through and from incredibly thin glass. The tech has been sold in various forms for military and other non-consumer applications for years.

But, riding the wave of interest in consumer adoption of virtual and augmented reality, Lumus recently announced $45 million in Series C venture capital to propel the company’s technology into the consumer landscape.

“Lumus is determined to deliver on the promise of the consumer AR market by offering a range of optical displays to several key segments,” Lumus CEO Ben Weinberger says.

This week at CES 2017, Lumus was showing off what they’re calling the Maximus, a new optical engine from the company with an impressive 55 degree field of view. For those of us used to the world of 90+ degree VR headsets, 55 degrees may sound small, but it’s actually been difficult to achieve that level of field of view in a highly compact optical system. Meta has a class-leading 90 degree field of view, but requires sizeable optics. Lumus’ 55 degree field of view comes from a sliver of glass less than 2mm thick. Crucially, you can also get your eyes very close to the Maximus optics, potentially enabling truly glasses-sized augmented reality headsets.


Looking Through the Lens

Unlike some of the company’s other optical engines which were shown integrated into development kit products, the Maximus was mounted in place and offered no chance to see any sort of tracking (though Lumus deals primarily in optical engines, not entire AR headsets).

Stepping up to the rig and looking inside, I saw an animated dragon flying through the air above the convention floor. The view was very sharp and, for an AR headset, felt like it had some immersive potential. However, the contrast didn’t seem great, with bright white areas appearing blown out. The image also had a silvery holographic quality to it. This may mean a lack of dynamic range, or that the display was not adjusting for ambient light in this demonstration.

Brightness seems to be among the Maximus optical engine’s strong qualities: even without adding any dimming lenses to cut back on ambient light, the image was bright and clear. Ultimately I was very impressed by the capabilities of the Maximus optical engine. Assuming there are no major flaws in the display system, this waveguide technology seems like it could be a foundation for extremely compact AR glasses, similar in size to regular spectacles (something the AR industry has been attempting to achieve for some time now).

The image I saw in the Maximus was 1080p, quite sharp at the 55 degree field of view, though Dr. Eli Glikman said that the resolution is limited only by the microdisplay that feeds the image to the optics. With a higher resolution microdisplay (such as Kopin’s new 2k x 2k model, perhaps), there’s great opportunity to scale image fidelity here.

Glikman said that the Lumus Maximus still has about a year of R&D left before it’s ready to be productized, but says that partner companies this year will introduce product prototypes based on the Maximus.

Sleek Prototype

To prove that the company’s optical engines are capable of enabling glasses-sized AR headsets, Lumus was also showing a prototype headset they call ‘Sleek’. It uses some of the company’s other optical engines and has a smaller field of view, but it’s made to show the impressively small form factor that these optics make possible.

How it Works

It’s actually a pretty awesome feat of physics to channel light down a slim piece of glass and then get it to pop out of that glass when and where you need it.

The Maximus optical engine, as seen at CES 2017, relied on the bulky electronics above the optics. There, a pair of microdisplays (which function as the light source of the optics) is housed. The image from each display is stretched and compressed to be emitted along the top of the lenses. From there it cascades down the optics and—from our understanding of Lumus’ proprietary technology—uses an array of prism-like structures in the glass to bounce certain sections of the injected light out toward the user’s eye. During that process, the image is reconstructed into the one originating on the microdisplay (somewhat like the process of pre-warping visuals to cancel out the warping of a headset’s lenses).
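
The physics that keeps the light trapped inside such a thin slab is total internal reflection, a standard optics result (nothing Lumus-specific). A brief sketch:

```latex
% Light bouncing inside the glass stays trapped whenever its angle of
% incidence at the glass--air boundary exceeds the critical angle:
\theta_c = \arcsin\!\left(\frac{n_{\mathrm{air}}}{n_{\mathrm{glass}}}\right)
% For typical optical glass (n_glass ~ 1.5, n_air ~ 1.0) this gives
% \theta_c \approx 41.8^\circ: rays steeper than that reflect losslessly
% down the slab until a partially reflective structure redirects them
% out toward the eye.
```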

With Lumus’ advances in waveguide optics, coupled with other impressive microdisplay advances seen at CES this year, it seems that practical everyday solutions for lightweight augmented reality hardware are rapidly approaching. CES 2018 may prove to be a fascinating milestone for augmented reality.


Hands-on: Noitom’s Hi5 VR Glove Brings Compelling Finger Tracking to the Vive

VR input gloves are getting a big boost thanks to HTC’s newly revealed Vive Tracker. The combination of Noitom’s Hi5 VR glove and the Tracker forms a surprisingly compelling input experience that adds finger-level fidelity to Vive experiences.

We’ve seen plenty of VR gloves and other finger-tracking input methods over the years, and while a few of them proved quite functional, most had one flaw standing in the way of adoption. Usually that flaw was the tracking system, which was either not good enough or relied on a complex third-party approach, making the system impractical for consumer use because of setup times and the difficulties of mixing and matching tracking technologies. The Vive Tracker, it seems, is about to change all of that.


Thanks to the Tracker, it’s become easy to precisely track third-party accessories in the same coordinate space, with the same accuracy, and with the same latency as the VR headset itself, eliminating a host of issues in one fell swoop.

Functional and Practical

For Noitom’s Hi5 VR glove, that led to a surprisingly compelling input experience on the Vive which feels more immersive for certain use-cases than the basic Vive controllers. After trying the gloves today at CES 2017, I came away feeling like I’d finally witnessed the right combination of finger tracking and motion input that could work in the consumer market.

One of the advantages of the Noitom Hi5 is the quick setup time. Unlike some other glove systems we’ve seen which also require bicep and even chest straps, the Hi5 is just the glove. Pull it on, cinch the wrist strap, and you’re good to go.

One thing I was initially concerned about was that the wrist-mounted Vive Tracker would wobble around and cause my virtual hand to wobble even when my real hand did not. Fortunately I found that the Tracker sat on my wrist close enough to the hand that the two were always rotationally in sync, and I was able to tighten the strap enough to prevent any wobbling.

Hands and Fingers Together

With the gloves and headset on, I immediately saw the responsive hand tracking provided by the Tracker, which taps into the same SteamVR Tracking tech as the headset. Beyond just hand-tracking, each of my fingers was individually tracked, offering a full range of vertical movement. Horizontal finger movements didn’t appear to be tracked, though the use-cases for that sort of movement seem extremely slim.

With the Noitom Hi5 gloves, I felt able to fully articulate my hand in the virtual world, including giving thumbs up, pointing, and a certain lewd gesture with ease. Pointing, in particular, worked well enough that it became a more precise way to activate virtual buttons to make various selections. I was also able to gently tap a row of dominos with a satisfyingly precise flick of a finger.

A Challenge Remains

Initiating a “grab” with VR gloves is still a bit of an issue. Pinch gestures are sometimes employed to make it clear to the computer when you do or don’t intend to be grasping a virtual object, but using a pinch to pick up objects often feels unnatural. Some gloves use a “mock” grab gesture where you basically close your hand part way and pretend to be grabbing a real object when there isn’t actually one in your hands. This too feels awkward. Without the feedback of a real object to grip, the virtual grabbing interaction feels unsatisfying. Controllers, oddly enough, usually feel much more natural for virtual grabbing functionality because you have the feedback of something in your hand as you are grabbing the object.

Interestingly, the “fake” grabbing gesture on the Hi5 felt better than on most gloves I’ve tried. So far as I can tell, this was a happy accident. The glove was a little big for my hand, and, combined with the plastic underglove I was wearing (for sanitary demo purposes), the material would bunch up between my fingers as I went to close my hand into a gripping gesture. This offered some natural resistance against my grip which actually made the grabbing experience feel quite a bit more real and satisfying for me. I told the Noitom folks that they may want to chase that accident and see if there might be a good way to make it happen on purpose.

For now, the Hi5’s ‘grab detection’ felt relatively good, but still gave me some trouble. The computer isn’t always sure when you want to be holding an object and when you want to let it go. This meant I had a number of instances where I wanted to throw an object but it was left stuck to my hand until the next try. That might not seem like a big deal when the object is a ping pong ball, but when you’re in the middle of a competitive VR multiplayer match and you go to throw that virtual grenade, only to find it stuck to your hand… you’re going to be cursing the glove. Input needs to be 99.9% (if not 100%) accurate, otherwise it will frustrate the user to no end. I often say: imagine if your mouse was 90% accurate… how frustrated would you be if every 10th click simply didn’t work? You’d probably throw that mouse out and buy a new one.

The grabbing gesture can certainly be refined and improved in software, but even then, it’s been a persistent challenge for most finger-tracked systems to find an all-encompassing approach to grab detection that works consistently for all the ways different users might attempt to grab virtual objects. It could be said that a more explicit grabbing gesture could be taught to users (like making a fist, for instance), but in my mind the purpose of the glove is largely defeated if the user needs to be taught how to grab or throw an object differently than they would in real life—after all, the whole point of bringing your fingers into the virtual world is to make the experience more natural, not less.
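
For a concrete sense of why grab detection is tricky, here’s a minimal C++ sketch of one common approach: thresholding average finger curl with hysteresis, so the grab state doesn’t flicker at the boundary (one cause of objects sticking to the hand). This is a generic illustration, not Noitom’s actual algorithm; the thresholds are assumptions.

```cpp
#include <array>

class GrabDetector {
public:
    // fingerCurl: 0 = fully open, 1 = fully closed, indexed thumb-to-pinky.
    // Returns true while the hand should be considered 'grabbing'.
    bool update(const std::array<float, 5>& fingerCurl) {
        // Ignore the thumb; average the curl of the four fingers.
        float curl = 0.0f;
        for (int i = 1; i < 5; ++i) curl += fingerCurl[i];
        curl /= 4.0f;

        // Hysteresis: a high curl is needed to start a grab, but a much
        // lower curl to release it, so throws release cleanly instead of
        // re-triggering as the hand opens.
        if (!grabbing_ && curl > kGrabThreshold)
            grabbing_ = true;
        else if (grabbing_ && curl < kReleaseThreshold)
            grabbing_ = false;
        return grabbing_;
    }

private:
    static constexpr float kGrabThreshold    = 0.7f;   // assumption
    static constexpr float kReleaseThreshold = 0.35f;  // assumption
    bool grabbing_ = false;
};
```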

– – — – –

Save for the inconsistent grab detection—and questions surrounding the awkwardness of pretending to grab something when there’s nothing actually in your hands—the Hi5 glove felt responsive and definitely made interactions with small objects in the virtual world more compelling. The addition of real finger pointing also opens the door to more nuanced interface input.


With some tweaks, Noitom has something pretty cool on their hands here. Now that we know it works, the next question is price. The company hasn’t announced pricing yet, but says the glove is being positioned for the consumer market on the way to its Spring 2017 launch.


Hands-on: HTC’s New Vive Tracker Makes VR More Immersive With Specialized Accessories

The Vive accessory market is about to get a huge boost thanks to the newly announced HTC Vive Tracker. Gaming, training, and more will benefit from the enhanced immersion that comes from wielding “real” tools, weapons, and instruments.

Today I was a sniper, a firefighter, and a professional baseball player, all thanks to VR. I’ve actually done all of those things in VR before, but this time I actually held a real (mock) gun, a high-pressure hose-nozzle, and a regulation baseball bat, and the immersion was far greater than just pretending a controller was any of those things.

While motion controllers are great for in-home use and cover a wide range of general VR use-cases, there are always going to be niche experiences that benefit from having the genuine article in your hands. For the most part, pistols are fine with a generic motion controller, but if you want to do virtual long range shooting, you’re going to want a proper rifle-shaped device so that you can hold it in the right position, look down the scope, and keep the stock to your shoulder.


Thankfully, HTC’s new Vive Tracker is about to make specialized VR accessories way easier to use for both consumers and out-of-home VR businesses. The self-contained device is tracked by the same system as the Vive headset and controllers, and can be easily attached to everyday objects or custom-built VR accessories. With an integrated battery and its own wireless connection to the host computer, the device not only tracks objects, but can also send information like button presses and trigger pulls to the computer. With all tracking and input unified into the same system that VR devs already know how to build for, things get easier all around.


As a testament to the Vive Tracker’s breadth of uses, HTC today showed off the device with integrations across a huge range of different use-cases thanks to accessories from a number of partners. Everything from gloves to guns to bats, and even a “real” virtual camera were demonstrated.

Among a number of experiences which used the Vive Tracker with specialized accessories, here’s what it was like to be a sniper, a baseball player, and a fireman in VR.

VRsenal VR-15 and ‘The Nest’ – Sniper

VRsenal had their VR-15 gun controller and haptic backpack running with The Nest, a sniping game for the Vive. The VR-15 had formerly housed an entire HTC Vive controller, but now integrates the Vive Tracker into the gun. The rifle, which is built for out-of-home VR systems, is appropriately heavy and robust, and includes a trigger along with two joysticks on either side of the foregrip for interacting with the game (in The Nest, this was used to toggle zoom power).


This version of The Nest had an integrated 3D model of the VR-15 that was identical to the controller in my hands. Sniping enemies at a distance from the vantage point of a small, high window was a blast thanks to the realistic weapon, which allowed me to tilt my head down to get an angle on the gun’s virtual scope.

Unlike trying to use a two-handed weapon like a rifle with two disconnected VR controllers, it was easy to use my forward hand for subtle adjustments before firing, and the weight of the gun meant I didn’t get that annoying shaking that can easily be seen in VR when a large virtual object is connected to a much smaller real object (like the Vive controller). When I finally squeezed the trigger, the haptic backpack I was wearing gave me a very satisfying rumble that added to the immersion.



1,000 Vive Trackers to be Given Away to Developers Ahead of Launch

HTC’s new Vive Tracker accessory was announced today, and while the company says it will launch to consumers in Q2 2017, ahead of that release HTC plans to give away at least 1,000 to VR developers.

At CES 2017 today, HTC announced their new Vive Tracker accessory, a standalone tracking device which is made to attach to real-world objects and third-party accessories. The company showed the Tracker in use with a wide range of third-party accessories.


While one use-case is direct-to-consumer, HTC is also expecting the developer community to be a major part of the Tracker’s ecosystem, enabling companies to make trackable accessories without the need for a more complex total integration with SteamVR Tracking technology.

Like the ‘Vive Pre’—of which thousands were given away ahead of the Vive’s consumer launch—HTC said they’d give away at least 1,000 Vive Tracker units to developers ahead of the device’s Q2 consumer launch. Details on how developers can secure one will be made available at a later date, though it sounds like it will be an application-based process.

Pricing on the HTC Vive Tracker hasn’t yet been announced, though we suspect it will be around $100, putting it in line with the cost of the Vive controller which has similar components.


Vive Tracker Enables a Bevy of Bats, Guns, Gloves, and Other Tracked Accessories

Today at CES 2017 HTC announced the Vive Tracker, a standalone tracking module designed to attach to anything so that it becomes tracked in the virtual world.

As far back as the reveal of the HTC Vive headset in 2015, we heard that we might one day see a standalone ‘puck’ tracker using the same tracking tech, and now it’s finally real. The Vive Tracker is a compact standalone tracking device made to bring real objects and third-party accessories into virtual reality.

The Vive Tracker uses the same SteamVR Tracking technology that’s found on the Vive headset and controllers and is promised to have the same accuracy. The Tracker has an integrated battery, a microUSB port, and what appears to be a standard camera-type mounting screw. Like the Vive headset and controllers, the Tracker has its own wireless connection to the computer. Thanks to the microUSB port, third parties can send data about their accessories to the host computer, which means that button clicks, trigger pulls, and other events can be used without requiring a separate connection to the computer. With more than 20 tracking points on the device, the unit is designed to fit a wide range of uses while remaining compact.
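
Because the Tracker reports through SteamVR like any other tracked device, reading its pose from an application is straightforward. Here’s a minimal C++ sketch using the OpenVR API, where trackers are exposed as a generic tracker device class (error handling is trimmed; this is a sketch, not official sample code):

```cpp
#include <openvr.h>
#include <cstdint>
#include <cstdio>

int main() {
    vr::EVRInitError err = vr::VRInitError_None;
    vr::IVRSystem* system = vr::VR_Init(&err, vr::VRApplication_Background);
    if (err != vr::VRInitError_None) return 1;

    // Fetch the latest poses for every tracked device in the system.
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    system->GetDeviceToAbsoluteTrackingPose(
        vr::TrackingUniverseStanding, 0.0f, poses, vr::k_unMaxTrackedDeviceCount);

    for (uint32_t i = 0; i < vr::k_unMaxTrackedDeviceCount; ++i) {
        // Trackers report a device class distinct from the HMD and controllers.
        if (system->GetTrackedDeviceClass(i) != vr::TrackedDeviceClass_GenericTracker)
            continue;
        if (!poses[i].bPoseIsValid)
            continue;
        // The last column of the 3x4 pose matrix is the position in meters.
        const auto& m = poses[i].mDeviceToAbsoluteTracking.m;
        std::printf("Tracker %u at (%.3f, %.3f, %.3f)\n",
                    i, m[0][3], m[1][3], m[2][3]);
    }

    vr::VR_Shutdown();
    return 0;
}
```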

At the accessory’s reveal today, HTC showed how it could be integrated into a wide range of third-party products. We saw everything from bats to guns to gloves and even a firehose simulator from partners who worked with HTC to make their accessories work with the Tracker.


The Vive Tracker has not yet been priced, but given the similarity to the Vive controllers, we guess the Tracker will be priced around the same $130 mark. The Vive Tracker is set for release in Q2 2017.
