Ready Player Cix: How One Rogue is Revolutionizing Mixed Reality

Several years ago, in a warm Los Angeles courtroom, a young man stood with one hand on a Bible, the other raised, repeating a solemn oath. Once he finished speaking, his name would be officially changed. He walked into the court [real name redacted] but would leave freer, fresher and more focused with his brand new identity: Cix Liv.

A few months before that fateful day in court, Liv had his identity stolen. Financial institutions told him in no uncertain terms that he had one of two choices: freeze all of his accounts while they sorted out the problem, or get a new identity. The second option was likely more of a joke than anything else, but Liv took it to heart. He decided to use the theft as a chance to reinvent himself, a chance to forge the identity he wanted, not the one anyone else had chosen for him. Liv knew exactly where to find this new identity. He had been keeping it for years in a world separate from our own — World of Warcraft.

Cix the Rogue on the World of Warcraft start screen

Cix was originally the name of a character from the immensely popular MMORPG World of Warcraft. Here was an identity that Liv had been pouring hours of time, intent and skill into for years. Cix was not a random name bestowed by well-meaning parents. Nor did it carry a lifetime’s worth of memories and experiences, not all of them wonderful. Cix represented everything Liv wanted from his new persona: freedom, individuality and, most importantly, a personality that would guide his life in the real world going forward.

Human Cix

As we chat at the Upload offices in San Francisco, Liv explains to me how the rebranding of his life connects to his current work in virtual reality.

Originally from Minnesota, Liv reflects: “I always told my friends one day I would just get in my car, drive to San Francisco and start a company. Three years ago I pulled that trigger.”

Even before he changed his name, “LIV was always a brand I was building since my early teens.” Today, the young company consists of Liv and his fellow co-founders: Pierre Faure (CTO) and AJ Shewki (CMO). Their team may be small, but their goals are anything but. In this gold rush era of relatively cheap and easier-to-produce VR content, LIV has decided it is going to delve into the vastly more complex and expensive world of VR hardware. Their goal is to create “a full stack, deployable content creation platform consisting of custom hardware and software with one goal: to make Mixed Reality accessible to the masses.”

Mixed Reality is a term still in the process of being fully defined and contextualized. As the immersive industry grows and changes, the definition of MR will likely do the same. Today, MR is most often associated with the complicated process by which real-life people are overlaid into the digital world in order to create a powerful visual representation of how a VR experience works.

For example, take a look at this video for Fruit Ninja VR:

It’s not bad by any means, and it does what most VR videos do nowadays: show a first-person perspective and highlight hand interactions as best it can.

Now take a look at this gif of the same game created using LIV’s unique MR platform.

By showing a real human in action, MR makes digital situations much easier to understand and explain to those outside. MR is a powerful tool for demonstrating VR; the problem is that it’s very technical, very expensive and very specialized. Only a few studios in the world can pull off something like what you saw above. LIV wants to change that. The team has created a simple, repeatable, portable MR studio that can be set up and deployed by just about anyone. It is named Cube.

The LIV Cube MR green screen fully deployed

LIV Cube is a “modular, seamless green screen designed from the ground up to capture studio quality Mixed Reality and experience room-scale Virtual Reality.” It measures 8x8x8 feet with a custom aluminum frame and weighs just 27 pounds. The entire thing can be set up in under an hour.

It takes more than a green screen to make MR run, however, and so joining LIV Cube on the front lines of mass-market MR are LIV Box and LIV Client. LIV Box is a custom-built computer designed by Liv himself. It is described by the company as “future proofed, custom, hand assembled PC hardware pre-calibrated and configured to run the latest in VR experiences.”

The final piece of the puzzle is LIV Client. This is “software built to remove the incredibly complicated task of calibrating virtual cameras and capturing software to successfully run, record and live stream Mixed Reality.”

It’s not terribly difficult to set up a green screen or find a powerful computer if you know what you’re doing and are willing to commit time, money and patience to the task. What is complicated, often prohibitively so, is making sure MR works flawlessly every time. There are countless minute calibrations necessary to pull off a proper MR experience, and for those without months of experience it’s simply too difficult to even attempt.

LIV Client, therefore, is the most valuable component of the entire LIV platform. With just a few clicks you can record reliable MR video or stream it to a live audience.
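
To make the underlying task a little more concrete, here is a minimal, illustrative sketch of the classic green-screen MR compositing approach that platforms like this automate. It is an assumption about the general technique, not LIV’s actual code: the player is keyed out of the physical camera feed and layered between a background render and a foreground render of the virtual scene, drawn from a virtual camera calibrated to match the physical one.

```python
# Illustrative sketch only, not LIV's pipeline: key the player out of the
# green-screen camera feed and layer them between a background and a
# foreground render of the virtual scene.
import numpy as np

def chroma_key_alpha(frame_rgb, key=(0, 177, 64), tolerance=80.0):
    """Return a 0..1 alpha per pixel: 1 where the pixel is NOT green screen."""
    diff = frame_rgb.astype(np.float32) - np.array(key, dtype=np.float32)
    distance = np.linalg.norm(diff, axis=-1)
    return np.clip(distance / tolerance, 0.0, 1.0)[..., None]

def composite_mixed_reality(camera_frame, vr_background, vr_foreground, fg_alpha):
    """Layer order, back to front: VR background, keyed player, VR foreground."""
    player_alpha = chroma_key_alpha(camera_frame)
    out = vr_background * (1.0 - player_alpha) + camera_frame * player_alpha
    out = out * (1.0 - fg_alpha) + vr_foreground * fg_alpha
    return out.astype(np.uint8)

# Dummy 720p frames; in a real setup the two VR layers come from a game engine
# whose virtual camera is calibrated to the physical camera's pose and lens.
h, w = 720, 1280
camera = np.full((h, w, 3), (0, 177, 64), dtype=np.float32)  # pure green feed
background = np.zeros((h, w, 3), dtype=np.float32)
foreground = np.zeros((h, w, 3), dtype=np.float32)
fg_alpha = np.zeros((h, w, 1), dtype=np.float32)
print(composite_mixed_reality(camera, background, foreground, fg_alpha).shape)
```

Getting that virtual camera to line up with the physical one, frame after frame, is exactly the calibration burden the LIV Client is meant to take off the user’s hands.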

Altogether, the LIV system has the potential to revolutionize the way studios and corporations explain and demo their software to the world. Pre-orders for the LIV MR platform begin on March 30, with shipments slated to start in July.

Cix Liv changed his name in an attempt to seize control of his own identity in a world that wanted to define who he was. Now, he sees VR as a place where the rest of the world can do the same.

As he puts it, “in the digital world you choose your own identity and people don’t realize how powerful that is.”

Hopefully, with tech like this, they will soon.

Disclaimer: Cix Liv rents a floating desk at the Upload SF co-working space. His standing as a paying member had no influence on this article’s inception or its content. 

Quixel Discusses Upcoming Sci-fi Experience Homebound

This week Swedish developer Quixel is set to release its debut virtual reality (VR) experience Homebound for Oculus Rift and HTC Vive. It’s a sci-fi adventure in which players find themselves in space, about to go through an astronaut’s worst nightmare: crashing back to Earth. They have to deal with a series of catastrophic events to survive. VRFocus sat down with developer Wiktor Öhman to discuss the title and its development.

How did Homebound initially come about?

“Well I was working on the environments, the main one of Homebound. I was creating that mainly as a marketing thing for Quixel and our tools, the textures and so on. But midway through creating that we were talking about how cool it would be to actually be able to explore it, it’s in space, floating around, just looking at things and inspecting the materials and stuff up close.

“I had never developed for VR at all, or any games at all, or any scripting in my life. But shortly after I got started, being able to open doors and basic stuff like that, we realised it would be a really, really cool experience, an actual proper experience with a story and so on, so it escalated very quickly from a showcase fly-through environment to a proper experience and just went from there. And I’ve been developing for 8 months now. I’m the only person working on the game.”

Homebound logo

What was the inspiration behind Homebound? 

“The actual art style was inspired by real life such as NASA and SpaceX; the first thing I made was the chair, one of the chairs in one of the pods, and that’s heavily inspired by SpaceX. It started out heavily inspired by SpaceX and then I just sort of mishmashed SpaceX with NASA and hard sci-fi stuff, and that laid the foundation for the art style. We’ve kept that throughout, even in the later environments in the game.

“The actual gameplay narrative, most people say, is heavily inspired by Gravity, but I never intended to do that; it sort of just happened because it’s the same concept. I totally see where it comes from.”

What influenced your choice of platform support?

“We were granted the Vive by Epic and their development grants, and that’s also the reason we chose to develop on Unreal Engine.”

Is Homebound a short VR experience or a fully fledged videogame?

“It’s an experience that’s as long as we could make it, for several reasons. It’s not a full game if you’re referring to a six or ten hour game; it’s more of an experience around thirty minutes or so. And the reason we kept it at that length was it just became too intense to keep it longer; there’s a lot of stuff happening all the time and it’s going to be quite overwhelming with flashing lights and zero gravity, 360-degree freedom, there’s just so much going on all the time. And I noticed that if I played for longer than that, uninterrupted, I didn’t feel very well, and most people have said the same, so I decided to keep it at a short but sweet length of around 30 minutes, which is also a good time for a casual pick up and play, when you just want to play some cool VR or want to show your friends. It’s a perfect length for an experience.”

Is there anything you’ve not included, that originally you wanted to?

“Regarding things that I wanted in but aren’t in it, I can’t say what it is, but there is something we’ve been wanting to get in since the beginning, that we’re still trying to get in, which’ll be awesome, if I say so myself, but unfortunately I can’t mention what it is if we can’t get it in by release.

“We’ve got more in than we anticipated and that we planned for, it hasn’t been a matter of cutting content, it’s been like ‘yeah we should totally add this’, it’s been a very creative and inspiring experience to develop the game.”

How did you handle the control mechanics for Homebound, any issues with simulator sickness?

“We have a couple of different ways to control: you can use a gamepad or motion controls. You have an assisted turn system – it’s a seated experience – so if you look far enough to the left or right the camera sort of assists you in looking further to the left or right than you actually do. You use the triggers to go forwards and backwards, strafe left and right, and ascend and descend, on both the gamepad and motion controllers.

“I personally didn’t feel motion sickness, nor did the in-house testers. Once we started getting testers in, a couple of hundred testers, we started getting a high frequency of reports of motion sickness, which was interesting, because it’s kind of hard to troubleshoot and test it when you don’t feel the symptoms yourself. So we had to make tweaks, make a build, send it out, get reports and adjust according to that. But the originally intended navigation control/layout is pretty much the same as before, it’s just been very, very tweaked. We noticed that moving quickly, when you’re not moving yourself, is a pretty big source of motion sickness, so we had to find a sweet spot for the speed.”
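
The assisted-turn idea Öhman describes can be sketched in a few lines. The snippet below is a hypothetical illustration of the general approach, not Homebound’s actual code: once head yaw passes a comfort threshold, the rig is rotated in the same direction so a seated player can keep looking further than their chair allows.

```python
# Hypothetical sketch of an assisted-turn system for a seated VR experience:
# once the player's head yaw passes a threshold, the body/camera rig yaws in
# the same direction so they can "look further" than physically comfortable.
def assisted_turn_rate(head_yaw_degrees, threshold=60.0, max_rate=45.0):
    """Extra rig rotation in degrees/second for a given head yaw (-180..180)."""
    overshoot = abs(head_yaw_degrees) - threshold
    if overshoot <= 0:
        return 0.0
    # Scale smoothly from 0 up to max_rate over the remaining yaw range.
    strength = min(overshoot / (180.0 - threshold), 1.0)
    direction = 1.0 if head_yaw_degrees > 0 else -1.0
    return direction * strength * max_rate

# Each frame: rig_yaw += assisted_turn_rate(current_head_yaw) * delta_time
print(assisted_turn_rate(30.0))   # 0.0   -- inside the comfortable zone
print(assisted_turn_rate(90.0))   # 11.25 -- gentle assist to the right
```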

Are you planning to develop further VR projects?

“We definitely hope to be creating more, this is sort of testing the waters to see how it works. We’ve all developed games before, but this is the first time we’ve developed an indie game, this is an experiment we’re doing.”

Hideo Kojima: VR Will “Have A Large Impact On Our Culture”

Hideo Kojima is rarely hesitant when pushing the envelope in his creative endeavors, and his resume (Metal Gear Solid, Zone of the Enders, P.T., Death Stranding) speaks for itself. He has become so prolific and influential that not only does his work inspire a rabid fandom, but his every word carries a great deal of weight with professionals throughout the industry. He meticulously manages many aspects of his games, and it would be interesting to see what a mind like his could bring to life with VR. In a new interview with Rolling Stone’s gaming vertical Glixel, Kojima spoke of his new gaming studio and also had quite a bit to say about virtual reality.

When it comes to VR, Kojima has gone from not seeming very impressed to expressing that he believes it will significantly change entertainment, education, and culture. This interview embodies the spirit of the latter, with him stating that VR is a “powerful medium that has the ability to change not only games but our lives”. He goes a step further by explaining how it could potentially shape things right down to how we order our food. “It’ll have a large impact on our culture at large.”

The more physically involved nature of VR, with headsets on our heads and controllers in hand, has resulted in a software ecosystem mostly filled with experiences that are intended to be short and sweet. With Resident Evil 7 being the most recent exception, it’s very rare that we get traditionally full-length gaming experiences. Responding to Glixel’s question about whether VR’s immersion has changed his stance on video games being considered art, Kojima compared VR to film, a medium that lives within one- to three-hour formats.

“I believe that in the very near future, games and movies will meld together,” he says. “The main difference is that a movie is not interactive, whereas a game is. It’s almost like industrial design, where you need to think about the way many people will interact with a product and design it around that. That’s a big difference between movies and games.”

When asked if he thinks two- to three-hour games can be satisfying and memorable, he responded that he believes so. “Games right now, the main way of creating a large-scale game has been to spend three or more years on something that takes 100 hours to play or something like that. But I think games will also move in the same way toward an episodic nature, meaning smaller but released in a steady stream.”

Beyond his comments on VR, Kojima also dropped a hint about his potential use of augmented reality. “In a way it’s like AR,” he says about breaking the fourth wall in games, like when players had to find a codec frequency for Metal Gear Solid on the rear of the game’s case. He said he doesn’t want to use the same tricks again, so maybe, in the future, we’ll see Kojima add an entirely new layer of game you’ll have to see through AR glasses or on your smartphone.

A lot of Kojima’s anecdotes about the future of things like film and education breaking away from traditional frames are already a reality as creators are able to offer dynamic and immersive VR experiences that operate outside of the box. In August of last year, Kojima joined the advisory board at Prologue Immersive to focus on VR. Hopefully, we’ll see him take a more active role and treat gamers to a virtual reality experience in a way only he can deliver.

Luke Thompson On VR Sickness, Sigtrap Games’ Future Plans & What VR Must Do In 2017

At the end of last year VRFocus was in attendance at Develop:VR, an event at which we saw some of the most entertaining, thrilling and intriguing uses of virtual reality (VR): from its use in video games to film and entertainment, to even learning how to test an electrical circuit box safely. We also found some of the ways you shouldn’t implement VR.

Amongst the more interesting talks was that of Sigtrap Games Co-Founder Luke Thompson, who in his discussion “Techniques for Comfortable Movement in VR” described the hows, whys and wherefores of player movement’s potential to cause discomfort in VR, as well as some of the methods developers can implement to reduce the possibility of sim-sickness and bolster comfort levels.

Sigtrap Games Luke Thompson

After his session we took Thompson aside to discuss these topics further and also see what he thought 2017 has in store for the VR industry.

VRFocus: You were talking about sickness today – motion sickness in VR. Are we any closer to getting this problem solved once and for all?

Luke Thompson, Sigtrap Games: [laughs] Ultimately I don’t think so. I mean – until you either have something that gets injected straight into your brain to sort of trick your vestibular system, or you have, you know, home devices where you literally move in one of those Lawnmower Man things – there are always going to be issues with it. It is, like I said, a fundamental mismatch between the different stimuli that your brain’s getting. So unless you can fake one of them sufficiently, even if you can sort of improve things to 99%, there’s always going to be the 1% of people who react badly to any particular technique.

So it’s a difficult one to ever say you’ve fully solved. We can sort of get closer and what really I think we should be aiming towards is a conclusive set of best practices, where we really understand what’s going wrong – not necessarily with a view to saying ‘we can solve all these problems’, but if we can understand them all and know how best to circumvent them, then that’s probably a more realistic goal, at least in the short term.

VRFocus: Okay. In terms of ways of preventing or minimalising the effect, though, I mean, is there one that stands out above all the others at the moment; would you say that?

Thompson: I, well, uh – with the caveat that this is all with given our experience, and-

VRFocus: Yes.

Thompson: -and again, it’s not going to apply to all games all the time, but certainly something, like I said in the talk, the best bang for buck is this sort of tunnelling, vignetting effect – where you restrict peripheral vision, based on the motion that’s happening in the game. Your brain gets a lot of its motion cues from that peripheral vision, and more so than it does from the centre of your vision, so, by restricting the information that it’s getting in that area of the eye, you can really do a lot to minimalise the amount of motion that your brain is trying to interpret. So, in terms of it being simple to implement, widely effective, and computationally extremely cheap, [there really is, like] it should be the first thing on any list of measures to implement.
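
To make the technique concrete, the vignette Thompson describes can be driven directly by the artificial motion applied to the player each frame. The snippet below is a minimal, hypothetical sketch of that idea, not Sigtrap’s implementation:

```python
# Illustrative sketch (not Sigtrap's code) of the tunnelling / vignetting idea:
# the faster the artificial motion, the more the peripheral view is masked off,
# so the brain receives fewer conflicting motion cues.
def vignette_strength(linear_speed, angular_speed,
                      max_linear=5.0, max_angular=90.0, max_strength=0.7):
    """Return 0..max_strength, where 0 = no vignette and 1 = fully closed."""
    linear_term = min(linear_speed / max_linear, 1.0)
    angular_term = min(angular_speed / max_angular, 1.0)
    return max_strength * max(linear_term, angular_term)

# A post-process shader would then darken or fade pixels far from the view
# centre in proportion to this strength, ideally smoothed over a few frames
# so the vignette eases in and out rather than popping.
print(vignette_strength(linear_speed=2.5, angular_speed=0.0))   # 0.35
print(vignette_strength(linear_speed=0.0, angular_speed=90.0))  # 0.7
```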

VRFocus: So, 2016. It’s been quite a year, in many, many ways – but especially for VR. Do you think 2016 was ‘The Year of VR’, as everyone has been terming it since the beginning of-

Thompson: No.

VRFocus: -you don’t think it has?

Thompson: No, no. Um, because, I think VR is going to, a few years from now, dwarf this year. I think we are, you know, what we’re seeing this year is the beginning of something. There are so many more things that we can do with VR that we haven’t figured out yet. Like, this year may provide the kernel of a lot of that. But this isn’t The Year of VR. This is The Year That VR Began. Right? Five years from now, there’s going to be so much more. We’re going to understand so much more. We’re going to be doing such exciting things with it, that I really think to call THIS The Year of VR would be to undersell the potential of VR.

VRFocus: In terms of what’s been done this year – I mean, you say about, the future will dwarf, I would say that 2016 certainly has dwarfed 2015-

Thompson: Mmhmm.

VRFocus: -as to what we’ve seen – what’s the most creative thing you’ve seen in VR this year?

Thompson: Creative…?

VRFocus: I mean, it could be anything, I know, but-

Thompson: No, that’s – that’s interesting.

VRFocus: But [VR has] been taken in so many different ways already. Is there anything that sort of sticks out in your mind as seeing something and going “wow, I wish I’d thought of that” or…?

Thompson: That’s tricky, actually. I mean, there’s been so much. I mean, one of the things that, again, one of the reasons that I feel like this year hasn’t been The Year of VR is because we haven’t really – you know, one of the things is that we haven’t figured out that killer app yet. We haven’t figured out, what is it that VR does that nothing else does? And we know those answers are there, and we’re starting to find them.

But, to really point out something that says, you know – I would like to be able to point out something where I could say “that has defined VR”, right? And that’s not something we can say here. Because we haven’t found that foothold yet. We know that those answers are there. We know the potential is there. We don’t know what the answers are yet. So, there has been a hell of a lot of awesome stuff this year. But I would say that most of it, for better or worse, has been within the confines of how we already understand games, rather than necessarily taking something to a new medium.

VRFocus: We’ve just translated what we knew from then, to what we have now.

Thompson: Exactly. And we’re starting to branch out from that, which is really exciting. And there are experiments that people have done. Just generally, the sort of things like – okay, here’s an example: Budget Cuts. The idea of using portals to move through the world, is really cool. Those things that you can, because – you’re toying about with, experiencing non-Euclidean geometry in a way that makes sense with your 3D understanding of the world, and that’s something you can’t do any other way. So that’s really cool. You have other things – do you know Unseen Diplomacy?

VRFocus: Yes, and funnily enough, I asked this question of somebody earlier and they said, “Unseen Diplomacy.”

Thompson: Yeah. Unseen Diplomacy is a really cool thing, and we’re actually working on something ourselves. So, we were working on something – and we’re still working on it – when we hadn’t actually heard of Unseen Diplomacy; and we were interested in a lot of the same things they were, and they were like “oh, [they’ve] already made this” – so it’s interesting seeing what sort of similarities. I can’t say too much about it, but we’re really excited about it. But one of the core things there is that social aspect of VR, and the fact that VR by its nature is very isolating; but thinking of cool ways around that and ways to even leverage that, to say, “I’m going to control what this other person can see in an interesting way”, and change how people communicate, with a local multiplayer setting; there’s some really cool opportunities there. And Unseen Diplomacy does that well! So. I suppose, if you wanted a concrete answer, those are maybe the ones to go for.

VRFocus: So. 2017. You mentioned this ‘other’ project, but what else is happening with Sigtrap Games? We’ve obviously got Sublevel Zero

Thompson: Yep. So, Sublevel Zero – I can’t put a date on it yet, but it will be coming out early next year, and we’ll be targeting [Oculus] Rift and [HTC] Vive for that. We’re really excited to get that out.

Sublevel Zero

We’ve already got a beta version of that on Steam and on GOG – for people who already own the game on there, they can opt into a beta and sort of see what we’re doing. A lot of the stuff we’re doing just right at the moment, because we are such a small team, we’re concentrating on the 2D console versions of the game, which are going to come out very early next year. And a lot of the stuff we’re doing on there, the optimisations in particular, really play back into the VR stuff. But it has taken a little bit of our time away from that, unfortunately. But we’ve got a lot of great ideas on what we’re going to be doing with that for release next year. We’re also, like I say, we’re working on that project that I’ve hinted at. We’re working on something else as well which we’re extremely excited about, and again-

VRFocus: Is that VR or non-VR?

Thompson: They’re both VR. So we’re not – we don’t see ourselves as an exclusively VR studio but, at the moment, the gameplay ideas that we want to explore are in VR; and ultimately, the reason for that is, what I was saying about not knowing what it is yet that VR can do that other mediums can’t. And that’s what we want to do! We want to do things in VR that you can’t do otherwise, really use-

VRFocus: So it’s not having those rules, and the freedom of creativity is opening the doors for other things.

Thompson: Exactly.

VRFocus: Again – we’ve discussed what the future will bring, but what’s the one thing that VR needs to do, above all else, in 2017?

Thompson: Well the obvious answer is wireless. That’s kind of the clear and present thing, getting rid of those wires and untethering you from this big brick of computational power. That’s very tricky to get right, but, you know, next year we might see things that do that well.

VRFocus: Have you tried the Santa Cruz, or any of the HTC adapters?

Thompson: I haven’t tried the wireless ones, unfortunately. I think it’s more likely that it’s going to go the way of Santa Cruz rather than the wireless add-ons for HTC Vive and things like that. The main reason there is, my primary concern is latency – if they can solve the latency problem with wireless, then that’s great – again, I haven’t tried it, so I don’t know how far along they are.

VRFocus: With the Vive, there’s also multiple separate entities as well. It’s not HTC themselves.

Thompson: Exactly. So you’re talking about things where all the different hardware can kinda get in each other’s way and you’ve gotta really optimise those things for it to be a good experience. So I suspect you’re going to see more things like Santa Cruz that sort of do the inside-out tracking and have the computational power actually attached to your head to begin with. Obviously, that’s not necessarily the way to go in the medium term – if you can get rid of the wireless latency problem, then you can pump a lot more data out of a computer than you can out of essentially two mobile phones strapped to your head. But I think in the short term it’s gonna happen. So, I think in terms of making VR get out to a wider audience, in terms of hardware, wireless is that thing that it really needs.

Standalone VR Oculus - 2 (Santa Cruz)

But the more subtle answer, I think, is a killer app. We need something that shows what VR can do that nothing else can do. That’s what’s gonna drive people to get involved with VR and buy it and try it out, and evangelise their friends. At the moment, it’s a cool piece of tech. But that’s for geeks like us, right? We’re like, “AWW, that’s cool, that’s cool! I’ll try that and I’ll spend two hours setting this thing up!” Like, it’s really nice that PSVR has given slightly more mainstream players a chance-

VRFocus: Ready access.

Thompson: -exactly, a chance to- and because they don’t care about the numbers, you know? They don’t care that the resolution’s slightly lower. They care about actually being able to do this without having to turn out their entire living room. So, the way that you actually make that apply to the mainstream is, do something spectacular in VR that can’t be done any other way and make people want to experience that. So ultimately, what we’re waiting on more than tech is content, and the design strategies and the design language that we’re lacking currently, that we’re just inheriting from regular video games.

Interview with Wikitude: new SDK & future of AR

Hi everybody,

let’s get back to down-to-earth AR for a bit. There are a couple of good toolkits out there to use with today’s consumer devices. Not everyone has AR glasses at their disposal or is willing to put them on during a fair or at work. One well-established player for mobile (but also smartglasses) is Wikitude. They just released their new version today. For the SDK, you can read the full changelog and spec info on their blog here.

But I took the chance to let Andy Gstoll explain to me directly how they plan to impact the AR space with their new release. Andy Gstoll has been pioneering the mobile augmented reality space with Wikitude since 2010 and is Evangelist & Advisor of the company today; he is also the founder of Mixed Reality I/O. So, we talked about the SDK and AR in general. Let’s jump right in after their release video:

augmented.org: Hi Andy, thanks for taking the time to talk about your new release and AR! Always a pleasure.

Andy: Same here, thanks for having me, Toby!

Congratulations on the new release of the Wikitude SDK. I had the chance to see it prior to release and know the specs, but could you briefly summarize: what do you think are the key technical breakthroughs with version 6 – for developers and, through that, also for end users?

Andy Gstoll

The Wikitude SDK 6 is our very first SDK product enabling a mobile device to do what we as humans do countless times per day with the highest precision: seeing in 3D. This means understanding the dimensions and depth properties of the physical environment around us in real time. After providing GEO AR technology since 2010 and 2D recognition and tracking since 2012, moving into the third dimension with our 3D instant tracking technology is a breakthrough for us and of course our developer customer base. In a short while it will also be a breakthrough for consumers, once those SDK 6 powered apps get out there.

I’ve seen the Salzburg castle demo where you walk through the city and the U.F.O. floats above the river Salzach. How do you glue the position of an object to the real world? Would two different users – coming from different directions – see the U.F.O. in the very same geo spot with the same relative orientation, i.e. would the augmented object face in the same direction in the real world for both?

The “glue” is our 3D instant tracking technology, which is based on an algorithm called SLAM in combination with some Wikitude secret sauce. Our 3D instant tracking is built to work in any “unknown” space, so the demo that you have seen would work anywhere and is not bound to Salzburg’s city center. However, positioning content based on a geo location, for example like Pokemon Go, is very easy to implement. Our GEO AR SDK would probably be best suited for that scenario instead or perhaps a combination of the two.
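
As a rough illustration of what geo-anchored placement involves (a hypothetical sketch, not the Wikitude SDK’s API), each device can convert the offset between its own GPS fix and a shared anchor coordinate into local east/north metres, so every user who walks up to the same geo location sees the object in the same physical spot:

```python
# Hypothetical sketch of geo-anchored AR placement (not Wikitude's SDK API):
# convert the offset between the device's GPS fix and the content's anchor
# coordinate into local east/north metres for rendering.
import math

EARTH_RADIUS_M = 6_371_000.0

def geo_offset_metres(device_lat, device_lon, anchor_lat, anchor_lon):
    """Approximate east/north offset from device to anchor (fine at short range)."""
    mid_lat_rad = math.radians((device_lat + anchor_lat) / 2.0)
    north = math.radians(anchor_lat - device_lat) * EARTH_RADIUS_M
    east = math.radians(anchor_lon - device_lon) * EARTH_RADIUS_M * math.cos(mid_lat_rad)
    return east, north

# A UFO anchored above the Salzach would use one shared anchor coordinate;
# each device computes its own offset and renders the object accordingly.
east, north = geo_offset_metres(47.8007, 13.0430, 47.8010, 13.0445)
print(round(east, 1), round(north, 1))  # roughly 112 m east, 33 m north
```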

Could you elaborate a bit on the new feature instant tracking and what it might be able to enable?

The obvious use cases are of course the visualisation of products in space. This could be furniture, or appliances like a refrigerator or a washing machine. But it could also be animated 3D characters that appear in front of you to perhaps tell you something or be part of a gaming concept. The technology also has great potential in the architecture industry; it can, for example, enable you to place a building on a piece of land. For enterprise, this could mean that you can visualise a piece of machinery in your new factory to demonstrate and test it in a greater context. But I am sure there will be apps built by our large developer community that even we were not able to think of.

The use cases you are describing are all good generic AR examples. As I understand it, instant tracking kicks in when you have no prior knowledge of your real space and no markers placed. But this could make exact and repeatable positioning impossible. If you need, for example, to overlay virtual parts on machinery, you would still need a known reference to begin with, right? Like in the video when the man examines the pipes and starts off at the top with a marker. How will instant tracking help out?

Thanks for bringing this up. We have to differentiate between slightly different use cases here and the different types of 3D tracking solutions suitable for each. You are right, 3D instant tracking is always most suitable when used in unknown spaces, rooms and environments. When actual recognition is required, for example of a hydraulic pump model xyz, you would use our 3D object recognition technology, which we introduced at Augmented World Expo in Berlin last October, mostly focussing on IoT use cases. Referring to the man examining the pipes, this is yet another technology available through our new SDK 6 called “extended tracking”. After scanning a unique marker of your own creation and choice – which you can see in the video at the top left – the man examines the pipes without having to keep the marker in the field of view of the tablet, giving him the freedom to examine the entire wall of pipes.

Wikitude instant tracking: placing a sofa

(Note from augmented.org: This video shows their instant tracking. You can read more about their IoT approach here.)

We just had the examples of architecture and machinery, so let’s speak of more use cases: the press release specifically states indoor and outdoor usage. Let’s say I build my university campus navigation app that needs to bring me to the right building (outdoors) and then to the right meeting room in the dark basement (indoors). Is switching between tracking methods seamless, and can both be used at the same time? How do I use it?

This first generation of our 3D instant tracking is optimised for smaller spaces. What you are describing I think would involve the mapping of very complex areas and structures such as the pathways of a university campus. To be honest, we have not fully tested this use case yet. What I can tell you is that it performs quite well in both light and also in darker environments, it cannot be completely dark of course as it is a vision based technology.

So, let’s talk a bit more about your tracking technology. Your team says it has improved the recognition quality heavily, especially in rough conditions. Do you think there is still room for more, or have we reached the limit of today’s handheld devices’ sensors? Do you plan to support Google’s Tango or a similar technology in the near future to go beyond?

To answer the first part of your question, which refers to our 2D tracking: yes, there is always room for improvement. However, our 2D tracking is a very mature product since we have been working on and improving it since 2012. I think it is not too self-confident if I claim that it is the best 2D tracking in the market today. With regards to Google Tango support, we currently do not have plans to support this niche platform. As you know, there is only one consumer Tango device out there today, which is the Lenovo Phab 2 Pro, available in the US and a few other additional countries; hence the market share is less than 1% today. With ASUS and other OEMs there will be more coming this year, but it will be quite some time until we have a significant base of market penetration making it worthwhile for developers to build on top of this platform. As long as this is the case, Wikitude will be focussing on the iPhone and Android powered smartphones out there by the billions today.

Everyday-AR in everyone’s pocket is still not there on a broad scale. If you count Pokémon, it had a short rise in 2016, but it’s still a niche for consumers. Do you agree? What do you think will push AR out there?

I agree to the extent that AR is still a niche for many people out there, but we are in the middle of changing exactly this as the three important puzzle pieces are coming together: hardware, software and content. Pokemon Go was of course a great example of what the potential of consumer AR is, but we will need more killer apps like this.

What do you think is missing?

The main challenge from a technology point of view is to recognise and track the three dimensional world around us all the time, flawlessly without any limitations. Wikitude 3D instant tracking technology is a big step forward but there are many challenges to be solved still, which will keep us and our competitors busy for some time.

Looking at competitors and partners: hardware players that are more industry-focused are building their HMDs successfully for their clients, like Epson or DAQRI. Others who are also looking at consumers are preparing their launches of AR software and hardware – be it Microsoft with Holographic or Meta. Do you think AR glasses will bring the breakthrough?

Whether it’s standard mono-cam smartphones, Tango and “Tango-like” devices or HMDs as you mentioned above – all of them will have their place in the ecosystem. However, I do believe that devices like HoloLens, ODG’s R9 and Magic Leap’s “magic device” will change everything in the mid- to long term, when they become small and ergonomic enough to be worn by end consumers. The main advantage of these devices is of course that you do not need to touch any displays with your hands and that they are always there in front of you, with the potential to give you very context-rich information and content as and when you need it.

Will you be on these platforms?

Wikitude is already on these kinds of devices, we have created customised and optimised SDK versions with our partners at ODG, Epson and Vuzix, which are available for purchase on our website now.

In the very beginning, I saw Wikitude only as a nice GPS-tag viewfinder. Today we are at version 6 and it has become a complete AR SDK. What will we be able to see in the near and far future with it? Could you give us a glimpse?

As indicated above, Wikitude is fully committed to augmenting the world around us. As the world as we know it comes in three dimensions, Wikitude will continue to push the envelope to provide the best 3D technology solutions, enabling developers to recognise and track objects, rooms, spaces and more. Different technology algorithms are needed for different scenarios today. We will not stop working until these different computer vision approaches can be merged into one, which is the ultimate goal.

That brings me to my next question – when do you think will we reach the positive point-of-no-return where everybody makes use of AR naturally?

This will be the case when the real and virtual worlds become indistinguishable from a technological and daily experience point of view.

Alright. So be it indistinguishable or obviously augmented – what do you think is the biggest chance for the world with AR technology?

My favourite use case of AR is teleportation. I have been living and working in many distant parts of the world over the last 20 years. When AR technology can render a high-quality 3D model of my family members right next to me and merge it with my immediate physical environment, even though they are a thousand miles away, I think it would make me, and the millions of other people traveling out there all the time, very happy. If you are interested in reading a bit more about this topic, you may want to check out my recently published article on TechCrunch.

Great! Thanks for your answers.

Andy: My pleasure!


So, that’s it. I can certainly relate to the teleportation concept; I long for it very much, too. Currently I’m trying to get around it in AltspaceVR and other solutions. But a proper holographic buddy or family at my desk would be best. Well, it seems like Wikitude is following their path well, enhancing AR tech even further for mobile and currently available HUD glasses. I will certainly check back to see what others make of the new SDK features. If you want to read Andy’s TechCrunch article, it’s here. So, stay tuned for more AR soon to come, as always!

– Toby.