Ready Player Cix: How One Rogue is Revolutionizing Mixed Reality

Several years ago, in a warm Los Angeles courtroom, a young man stood with one hand on a Bible, the other raised, repeating a solemn oath. Once he finished speaking, his name would be officially changed. He walked into the court [real name redacted] but would leave freer, fresher and more focused, with his brand-new identity: Cix Liv.

A few months before that fateful day in court, Liv had his identity stolen. Financial institutions told him in no uncertain terms that he had one of two choices: freeze all of his accounts while they sorted out the problem, or get a new identity. The second option was likely more of a joke than anything else, but Liv took it to heart. He decided to use the theft as a chance to reinvent himself, a chance to forge the identity he wanted, not the one anyone else had chosen for him. And Liv knew exactly where to find this new identity. He had been keeping it for years in a world separate from our own: World of Warcraft.

Cix the Rogue on the World of Warcraft start screen

Cix was originally the name of a character from the immensely popular MMORPG World of Warcraft, an identity Liv had been pouring hours of time, intent and skill into for years. Cix was not a random name bestowed by well-meaning parents. Nor did it carry a lifetime's worth of memories and experiences, not all of them wonderful. Cix represented everything Liv wanted from his new persona: freedom, individuality and, most importantly, a personality that would guide his life in the real world going forward.

Human Cix

As we chat at the Upload offices in San Francisco, Liv explains to me how the rebranding of his life connects to his current work in virtual reality.

Originally from Minnesota, Liv reflects: "I always told my friends one day I would just get in my car, drive to San Francisco and start a company. Three years ago I pulled that trigger."

Even before he changed his name, "LIV was always a brand I was building since my early teens." Today the young company consists of Liv and his fellow co-founders: Pierre Faure (CTO) and AJ Shewki (CMO). Their team may be small, but their goals are anything but. In this gold-rush era of relatively cheap, easy-to-produce VR content, LIV has decided to delve into the vastly more complex and expensive world of VR hardware. Their goal is to create "a full stack, deployable content creation platform consisting of custom hardware and software with one goal: to make Mixed Reality accessible to the masses."

Mixed Reality is a term still in the process of being fully defined and contextualized. As the immersive industry grows and changes, the definition of MR will likely do the same. Today, MR is most often associated with the complicated process by which real-life people are overlaid into the digital world to create a powerful visual representation of how a VR experience works.

For example, take a look at this video for Fruit Ninja VR:

It's not bad by any means, and it does what most VR videos do nowadays: it shows you a first-person perspective and highlights hand interactions as best it can.

Now take a look at this GIF of the same game, created using LIV's unique MR platform.

Showing a real human in action makes digital situations far easier to understand and explain to outsiders. MR is a powerful tool for demonstrating VR; the problem is that it's very technical, very expensive and very specialized. Only a few studios in the world can pull off something like what you saw above. LIV wants to change that. The team has created a simple, repeatable, portable MR studio that can be set up and deployed by just about anyone. It is named Cube.
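At its core, this kind of MR capture keys the performer out of the green-screen footage and layers them over the rendered game frames. As a rough illustration only, and not LIV's actual pipeline, here is a minimal chroma-key composite in Python with OpenCV; the HSV bounds are assumptions that would need tuning to the actual screen and lighting:

```python
# Minimal chroma-key compositing sketch (illustrative only, not LIV's pipeline).
# Assumes camera_bgr and game_bgr are frames of identical dimensions.
import cv2
import numpy as np

def composite_mr_frame(camera_bgr, game_bgr):
    """Key the green screen out of camera_bgr and layer the person over game_bgr."""
    hsv = cv2.cvtColor(camera_bgr, cv2.COLOR_BGR2HSV)
    # Rough bounds for green-screen green; tuning these per lighting setup
    # is exactly the kind of fiddly calibration the article describes.
    lower, upper = np.array([40, 60, 60]), np.array([85, 255, 255])
    green_mask = cv2.inRange(hsv, lower, upper)       # 255 where the screen shows
    person_mask = cv2.bitwise_not(green_mask)         # 255 where the person is
    person = cv2.bitwise_and(camera_bgr, camera_bgr, mask=person_mask)
    background = cv2.bitwise_and(game_bgr, game_bgr, mask=green_mask)
    return cv2.add(person, background)
```

A real pipeline also has to match the physical camera's pose and lens to a virtual in-game camera, which is where most of the calibration pain the article mentions comes from.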

The LIV Cube MR green screen fully deployed

LIV Cube is a “modular, seamless green screen designed from the ground up to capture studio quality Mixed Reality and experience room-scale Virtual Reality.” It measures 8x8x8 feet with a custom aluminum frame and weighs just 27 pounds. The entire thing can be set up in under an hour.

It takes more than a green screen to make MR run, however, so joining LIV Cube on the front lines of mass-market MR are LIV Box and LIV Client. LIV Box is a custom-built computer designed by Liv himself, described by the company as "future proofed, custom, hand assembled PC hardware pre-calibrated and configured to run the latest in VR experiences."

The final piece of the puzzle is LIV Client. This is “software built to remove the incredibly complicated task of calibrating virtual cameras and capturing software to successfully run, record and live stream Mixed Reality.”

It's not terribly difficult to set up a green screen or find a powerful computer if you know what you're doing and are willing to commit time, money and patience to the task. What is complicated, often prohibitively so, is making sure MR works flawlessly every time. There are countless minute calibrations necessary to pull off a proper MR experience, and for those without months of experience it's simply too difficult to even attempt.

LIV Client, therefore, is the most valuable component of the entire LIV platform. With just a few clicks you can record reliable MR video or stream it to a live audience.

Altogether, the LIV system has the potential to revolutionize the way studios and corporations explain and demo their software to the world. Pre-orders for the LIV MR platform begin on March 30, with shipments slated to start in July.

Cix Liv changed his name in an attempt to seize control of his own identity in a world that wanted to define who he was. Now, he sees VR as a place where the rest of the world can do the same.

As he puts it, “in the digital world you choose your own identity and people don’t realize how powerful that is.”

Hopefully, with tech like this, they will soon.

Disclaimer: Cix Liv rents a floating desk at the Upload SF co-working space. His standing as a paying member had no influence on this article’s inception or its content. 


Man Reports Sensation In Missing Fingers Using Oculus Touch

Bob Murphy was born missing three fingers: the pinky, ring and middle digits on his right hand end around the first knuckle. Murphy owns an Oculus Rift virtual reality headset and recently obtained the Oculus Touch hand controllers as well. According to Murphy, he acquired Touch "with the sole purpose to see what it's like to have ten fingers."

With Touch, the Rift represents your hands as ghostly floating 3D approximations. You can make a fist, do a thumbs up, and point using the controller’s capacitive sensors.

“My brain…only understands having one full hand and one less than full hand,” Murphy explained in an interview with UploadVR. “But when you have an experience that makes your hands actually look like hands, that’s what triggered me.”

Murphy says he's not entirely sure exactly which experience he was using when the first sensations occurred; he just remembers how it felt.

“It wasn’t something I expected to happen and my brain kind of internalized it the whole time. My brain kept trying to figure out what was up and I started feeling what can only be described as tingling sensations from parts of my fingers that I’ve never had,” he said.

The closest approximation Murphy could come up with for the feelings he was getting with Touch was the phantom limb sensation that amputees sometimes report. In Murphy's case, however, he never had these fingers to begin with.

We reached out to Dr. Arshya Vahabzadeh, a physician and former psychiatry instructor at Harvard Medical School. Vahabzadeh works at a VR/AR startup called Brain Power that studies the ways our brains can be influenced, and our lives improved, using immersive technology.

According to Dr. Vahabzadeh:

We know that people who are born with missing limbs may experience a range of phantom limb sensations. This may suggest that representations of our limbs may be hard-wired into our brains. It has been suggested that in individuals who are born with missing limbs, feelings of phantom movements may be modulated by parts of their brain that deal with sight.

We do know that our brains are very neuroplastic, meaning they are able to reorganize in response to experiences. Virtual reality is already being used to help people with amputations who are experiencing phantom limb pain. While the research literature is limited, virtual reality may very well be able to modulate a range of other phantom limb sensations.

More cases like Murphy’s could one day prove out these ideas. For now, Murphy is excited to keep experimenting with different experiences and feeling something he never thought was possible.


Coming up next!

Hi everybody,

the year is warming up for more AR, and it's time to plan your trips and bug your management (or yourself) for travel budget. As a guide, I thought I'd once more sum up the upcoming AR/VR-related conferences to get us started! If you are in London, you are quite lucky again. So, here we go:

First overview of AR/VR conferences in 2017

Mobile World Congress, Barcelona, Spain, 27.2.-2.3.
Mobile World Congress has typically been about smartphones, but its scope is expanding to wearables, possibly even smart AR glasses. The conference could reveal more AR-enabled devices; perhaps we'll get a shipping date for the Zenfone AR or see other Tango phones in the wild?

Wearable Tech / AR Show / MXR Summit, London, UK, 7.3.-8.3.
"I find mixed reality much more exciting than VR. It doesn't take you out of this world. Instead, it adds elements to our real world. And it has great flexibility." So says Peter Jackson, as quoted by MXR Summit. Well, damn sure agreed. This triplet of conferences on wearables, IoT, AR, VR and more is a good moment to stop by in the UK's capital.

Eurographics, Lyon, France, 24.4.-28.4.
Eurographics is not strictly AR-connected, but if you are interested in 3D graphics developments and new research in the field, you should check their agenda. After all, AR is highly visual and hungry for more performance, which these folks provide.

FMX Conference, Stuttgart, Germany, 2.5.-5.5.
The visual effects conference, with high-class speakers from the movie and CGI business all over the world, is held every year in Stuttgart. Last year they had quite a few AR- and VR-related talks and demos; you can read some of my coverage here. If you are closer to media production, it's worth a visit!

VRWorld, London, UK, 16.5.-17.5.
With their tagline "The Business of Augmented, Mixed and Virtual Reality" they make it clear: the huge roster of 150 speakers will cover business stories, present work in progress and allow for good networking. augmented.org is a media partner for a reason.

Augmented World Expo, Santa Clara, CA, USA, 31.05. – 02.06.
A classic, well-established and huge AR conference. If you are enjoying the west coast in May, it's surely worth a visit to meet quite a number of AR veterans and get updated first-hand.

VR AR World, London, UK, 13.06.-15.06.
If you are in Europe in summer you could check out various growing conferences. VR AR World, for example, is now in its second year and attracts more and more business people. Co-located with the TechXLR8 event.

Digility, Cologne, Germany, 5.7.-6.7.
Personally, I really enjoyed the first Digility in 2016, and not only as a media partner, with Gamescom co-located in beautiful Cologne: it was a great mixture of tech talks and philosophical approaches to the whole metaverse. Great speakers gave you the chance to look beyond your own nose and learn something new.

Augmented World Expo Europe, October?, Some City?
Last year the European twin to AWE USA was held in Berlin. I'm pretty sure we can expect something coming up in fall 2017 again! ;-) So, keep some room in your calendar.

ISMAR, Nantes, France, 9.10.-13.10.
THE classic AR conference. augmented.org has been a media partner and attendee multiple times, and this year you will get the chance to learn more about current AR research in France. The international symposium on mixed and augmented reality is pre-business-stage, so don't expect a marketing show, but rather deep-dive techy nerd stuff!

The Tech Expo, London, UK, 16.10.-17.10.
The Tech Expo covers more than AR. They have a blend of five tracks covering AR, VR, FinTech, Contextual, Emerging Tech and IoT topics. Check back on their page when they update the schedule.

If an important event is missing – let me know! Happy to update.

PS: No, the image is not a hint at an upcoming Apple AR device; I just felt free to re-use metaio's conference image. ;-)

Cheers,
TOBY.

Oculus Vows Appeal of $500 Million Verdict, ZeniMax Threatens Injunction

The legal battle between ZeniMax Media and Oculus VR has a verdict from the jury. On the first of many questions put to it, the jury decided Oculus did not misappropriate trade secrets.

The jury, however, also decided that Oculus co-founder Palmer Luckey failed to comply with a non-disclosure agreement he signed, as did Oculus by extension. Oculus and its co-founders Luckey and Brendan Iribe were found to owe ZeniMax $500 million as a result of copyright infringement and "false designation." We've uploaded the full 90-page document the jury filled out here.

Oculus released a statement vowing an appeal:

The heart of this case was about whether Oculus stole ZeniMax’s trade secrets, and the jury found decisively in our favor. We’re obviously disappointed by a few other aspects of today’s verdict, but we are undeterred. Oculus products are built with Oculus technology. Our commitment to the long-term success of VR remains the same, and the entire team will continue the work they’ve done since day one – developing VR technology that will transform the way people interact and communicate.  We look forward to filing our appeal and eventually putting this litigation behind us.

ZeniMax released a statement as well threatening an injunction:

We are pleased that the jury in our case in the U.S. District Court in Dallas has awarded ZeniMax $500,000,000 for Defendants’ unlawful infringement of our copyrights and trademarks, and for the violation of our non-disclosure agreement with Oculus pursuant to which we shared breakthrough VR technology that we had developed and that we exclusively own.  In addition, the jury upheld our complaint regarding the theft by John Carmack of RAGE source code and thousands of electronic files on a USB storage device which contained ZeniMax VR technology. While we regret we had to litigate in order to vindicate our rights, it was necessary to take a stand against companies that engage in illegal activity in their desire to get control of new, valuable technology.

The liability of Defendants was established by uncontradicted evidence presented by ZeniMax, including (i) the breakthrough in VR technology occurred in March 2012 at id Software through the research efforts of our former employee John Carmack (work that ZeniMax owns) before we ever had contact with the other defendants; (ii) we shared this VR technology with the defendants under a non-disclosure agreement that expressly stated all the technology was owned by ZeniMax; (iii) the four founders of Oculus had no expertise or even backgrounds in VR—other than Palmer Luckey who could not code the software that was the key to solving the issues of VR; (iv) there was a documented stream of computer code and other technical assistance flowing from ZeniMax to Oculus over the next 6 months; (v) Oculus in writing acknowledged getting critical source code from ZeniMax; (vi) Carmack intentionally destroyed data on his computer after he got notice of this litigation and right after he researched on Google how to wipe a hard drive—and data on other Oculus computers and USB storage devices were similarly deleted (as determined by a court-appointed, independent expert in computer forensics);  (vii) when he quit id Software, Carmack admitted he secretly downloaded and stole over 10,000 documents from ZeniMax on a USB storage device, as well as the entire source code to RAGE  and the id tech® 5 engine —which Carmack uploaded to his Oculus computer; (viii) Carmack filed an affidavit which the court’s expert said was false in denying the destruction of evidence; and (ix) Facebook’s lawyers made representations to the court about those same Oculus computers which the court’s expert said were inaccurate. Oculus’ response in this case that it didn’t use any code or other assistance it received from ZeniMax was not credible, and is contradicted by the testimony of Oculus programmers (who admitted cutting and pasting ZeniMax code into the Oculus SDK), as well as by expert testimony.

We will consider what further steps we need to take to ensure there will be no ongoing use of our misappropriated technology, including by seeking an injunction to restrain Oculus and Facebook from their ongoing use of computer code that the jury found infringed ZeniMax’s copyrights.

ZeniMax CEO Robert A. Altman also released a statement:

Technology is the foundation of our business and we consider the theft of our intellectual property to be a serious matter. We appreciate the jury’s finding against the defendants, and the award of half a billion dollars in damages for those serious violations.

ZeniMax first accused Oculus of stealing its technology shortly after the announcement that Facebook was acquiring the startup back in 2014 for $2 billion (now thought to be $3 billion). It claimed that the Oculus Rift VR headset was built using its own technology and that John Carmack, the legendary game developer formerly of ZeniMax-owned id Software, had used its resources to offer essential help in developing the Rift.

Carmack does have ties to the Rift dating back to 2012, when creator Palmer Luckey sent him an early prototype that Carmack would demonstrate at that year’s E3 running id-made Doom 3: BFG Edition. The help Carmack offered Oculus during this time and the dispute of what code was made available to Oculus made up much of the trial.

The verdict comes after a trial that saw figureheads at Facebook, Oculus, and ZeniMax take the stand. Most notably, Mark Zuckerberg, CEO of Facebook, answered questions. He stated that “Oculus products are based on Oculus technology.”

We also saw the re-emergence of Luckey, around four months after he fell off the radar following the revelation that he had helped fund a political propaganda campaign. Luckey was insistent that he had built the Rift on his own, despite claims from ZeniMax that he lacked the knowledge to do so. Former Oculus CEO Brendan Iribe also gave his account of the origins of the company, and Carmack himself took to the stand to defend himself.

Garrett Glass is a freelance writer based in Texas. He spent the last few weeks following the case for UploadVR.


First Look: Basemark’s ‘VRScore’ Benchmark Arrives With a Unique Solution for Accurate Testing

Basemark has officially launched its virtual reality performance benchmark, VRScore, and the package comes complete with an ingenious, low-cost, hardware-assisted solution to help ensure results are accurate.

I've written previously about how tough a problem accurately benchmarking virtual reality hardware and software really is. VR's tight integration between hardware, application and drivers, mixed with platform providers' varying approaches to performance optimisation, means getting numbers that are indicative or even useful is extremely difficult.

We took a look at Futuremark's early solution for measuring motion-to-photon performance, which featured a bewildering and expensive array of light-sensitive sensors and an oscilloscope in an effort to close the loop on the latency between an application rendering a scene and the user inside VR experiencing it. The solution was, perhaps unsurprisingly, abandoned, with Futuremark instead recommending its VRMark software be used largely experientially across hardware setups to gauge relative performance; a solution which leaves much to be desired from an analytical standpoint.


SEE ALSO
Futuremark Explains Why VR Benchmarking is About More Than Just Numbers

Enter Basemark, a Helsinki-based company that has made a name for itself across a wide array of performance measurement solutions. It is perhaps best known, however, for the widely used Basemark X, its solution for measuring graphics performance on mobile devices. Its new offering is called VRScore, and it promises an analytical approach to measuring the performance of your PC hardware and chosen VR headset.

The VRScore package comprises the base performance testing software itself (which comes in both DX11 and 12 guises), and the VRTrek – a device which features optical sensors and is hooked up to your PC’s soundcard Mic-in socket.

The software itself is broken into three sections:

System Test: Purely a test of your PC's rendering grunt, this runs the Crytek-developed 'Sky Harbor' VR experience, built on the company's CryEngine software, and measures the resulting frame rate throughout. The benchmark is rendered in stereoscopic 3D and uses pre-baked head motions, presumably recorded from an in-house playtest, to simulate VR headset usage. Each frame of the experience is rendered sequentially as quickly as possible, and that sequence is run multiple times; the faster you complete the loop, the faster your PC (see the sketch below). This clearly does nothing to gauge performance in relation to your VR headset; for that, you'll need to move to the full test.

The VRScore ‘Sky Harbor’ experience running in System Test mode
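In spirit, this kind of offline loop test is simple: render a fixed frame sequence flat-out and score by elapsed time. Here's a minimal sketch of the idea in Python (our illustration, not Basemark's code; render_frame is a hypothetical stand-in for the real stereoscopic render call):

```python
# Sketch of loop-based GPU benchmarking: time a fixed, pre-baked frame sequence.
import time

def run_system_test(render_frame, num_frames=2000, loops=3):
    """Render the sequence as fast as possible; score in frames per second."""
    scores = []
    for _ in range(loops):
        start = time.perf_counter()
        for i in range(num_frames):
            render_frame(i)                  # draw frame i of the pre-baked sequence
        elapsed = time.perf_counter() - start
        scores.append(num_frames / elapsed)  # effective frames per second this loop
    return sum(scores) / len(scores)         # average across loops
```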

VRTrek Test: This is the more interesting of the two benchmark modes as it leverages Basemark’s unique selling point, the VRTrek sensor device. VRTrek tackles a difficult problem (measuring the latency between images being rendered at the PC and displayed on the VR headset) rather elegantly. This benchmark runs the same looped experience as the System Test above, but this time displayed through the headset, ready for latency measurement.

The VRTrek device contains two photosensitive sensors mounted on a height-adjustable perspex stand. Tweak the height of the sensors for your VR headset and position them to sit in front of each lens. Then you simply attach the single audio cable to an enabled 3.5mm Mic jack on your PC's sound card and you're ready to go.

Why the sound card hook-up? Well, this is the clever part. The Trek plugs into the PC via a spare microphone-in 3.5mm jack; the PC coordinates timed flashes on the VR headset's display, and the Trek sends analogue measurements of brightness right back through the microphone port. The VRScore benchmark takes these readings and compares the timing of the flash command against the actual flash to determine latency. The device allows the VRScore benchmark to detect dropped frames, frame latency and even duplicate frames received by the VR headset.
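To make the trick concrete, here is a rough sketch of the general approach (an assumption-laden illustration, not Basemark's implementation): sample the mic-in channel the sensor feeds, trigger a flash, and time the brightness spike. trigger_flash is a hypothetical callback into the renderer:

```python
# Sketch of photodiode-over-mic-in latency timing (illustrative, not Basemark's code).
import numpy as np
import sounddevice as sd  # third-party: pip install sounddevice

def measure_flash_latency(trigger_flash, sample_rate=48000, window_s=0.25):
    frames = int(sample_rate * window_s)
    recording = sd.rec(frames, samplerate=sample_rate, channels=1)  # start capture
    trigger_flash()  # hypothetical: tell the renderer to flash the display white now
    sd.wait()        # block until the capture window is full
    signal = np.abs(recording[:, 0])
    threshold = 0.5 * signal.max()            # crude spike detector
    spike_index = int(np.argmax(signal > threshold))
    # Approximation: treats capture start and flash command as simultaneous.
    return spike_index / sample_rate          # seconds from command to photons
```

Because the mic input is sampled at a fixed, known rate, the spike's sample index converts directly into time, which is what makes an ordinary sound card usable as a cheap measurement instrument.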


Here’s a brief breakdown of technical specifications for the VRTrek device:

Spectral range of sensitivity: near infrared to deep blue / purple
Response time: 8 μs
Field of view (FoV): 12°
Accuracy: 0.2 ms
Precision: <0.01 ms

VR Experience Mode: This simply gives you the chance to try out the Sky Harbor benchmark experience for yourself, and it is well worth doing so. Road to VR's Ben Lang was so impressed with VRScore's little demo after trying it at last year's GDC that he remarked Crytek had "accidentally made the most spectacular cinematic VR short I've ever seen". While expectations and qualitative bars for VR experiences have been raised in the last year, Sky Harbor remains an extremely impressive demonstration of how transportive VR can be, especially when built with the level of production design present here.


VRScore supports the HTC Vive, Oculus Rift and OSVR's HDK as of writing; for the purposes of this review we used an Oculus Rift.

Other features of note are the ability to engage NVIDIA Multi-Resolution Shading and Lens Matched Shading, both extremely interesting technologies which aim to reduce GPU rendering burden for better overall performance (currently DX11 only). You're also able to run the benchmarks at non-native HMD resolutions. We didn't get the chance to test this, but it looked to offer a way to test super-sampling.

Alas, due to a shipping delay, our review VRTrek device hadn't made it to us in time for this article, but we'll be putting the system through its paces this week for more detailed feedback. In the meantime, Basemark looks to have struck a good balance between accessibility, spectacle and analytical credentials with VRScore; we'll let you know if it performs as expected soon.

VRScore is launching initially in its Corporate version, with the Professional and Free versions following in Q2 2017. The differences between the editions are broken down on Basemark's site here.


Disclosure: Basemark supplied Road to VR with a copy of VRScore and one VRTrek device for evaluation.


Interview with Wikitude: new SDK & future of AR

Hi everybody,

let's get back to down-to-earth AR for a bit. There are a couple of good toolkits out there to use with today's consumer devices; not everyone has AR glasses at their disposal, or is willing to put them on during a fair or at work. One well-established player for mobile (but also smartglasses) is Wikitude, and they just released their new version today. You can read the full changelog and spec info for the SDK on their blog here.

But I took the chance to let Andy Gstoll explain to me directly how they plan to impact the AR space with the new release. Andy has been pioneering the mobile augmented reality space with Wikitude since 2010 and is the company's Evangelist & Advisor today; he is also the founder of Mixed Reality I/O. So, we talked about the SDK and AR in general. Let's jump right in after their release video:

augmented.org: Hi Andy, thanks for taking the time to talk about your new release and AR! Always a pleasure.

Andy: Same here, thanks for having me, Toby!

Congratulations on the new release of the Wikitude SDK. I had the chance to see it prior to release and know the specs, but could you briefly summarize: what do you think are the key technical breakthroughs of version 6 – for developers and, through that, for end users?


The Wikitude SDK 6 is our very first SDK product enabling a mobile device to do what we as humans do countless times per day with the highest precision: see in 3D. This means understanding the dimensions and depth properties of the physical environment around us in real time. After providing GEO AR technology since 2010 and 2D recognition and tracking since 2012, moving into the third dimension with our 3D instant tracking technology is a breakthrough for us and of course our developer customer base. In a short while it will also be a breakthrough for consumers, once those SDK 6-powered apps get out there.

I've seen the Salzburg castle demo where you walk through the city and the U.F.O. floats above the river Salzach. How do you glue the position of an object to the real world? Would two different users, coming from different directions, see the U.F.O. in the very same geo spot with the same relative orientation, i.e. would the augmented object face the same direction in the real world for both?

The “glue” is our 3D instant tracking technology, which is based on an algorithm called SLAM in combination with some Wikitude secret sauce. Our 3D instant tracking is built to work in any “unknown” space, so the demo that you have seen would work anywhere and is not bound to Salzburg’s city center. However, positioning content based on a geo location, for example like Pokemon Go, is very easy to implement. Our GEO AR SDK would probably be best suited for that scenario instead or perhaps a combination of the two.

Could you elaborate a bit on the new feature instant tracking and what it might be able to enable?

The obvious use cases are of course the visualisation of products in space. This could be furniture, or appliances like a refrigerator or a washing machine. But it could also be animated 3D characters that appear in front of you to perhaps tell you something or be part of a gaming concept. The technology also has great potential in the architecture industry; it can, for example, enable you to place a building on a piece of land. For enterprise, this could mean you can visualise a piece of machinery in your new factory to demonstrate and test it in a greater context. But I am sure there will be apps built by our large developer community that even we were not able to think of.

The use cases you are describing are all good generic AR examples. As I understand it, instant tracking kicks in when you have no prior knowledge of your real space and no markers placed. But this could make exact and repeatable positioning impossible. If you need to overlay virtual parts on a piece of machinery, for example, you would still need a known reference to begin with, right? Like in the video, when the man examines the pipes and starts off at the top with a marker. How will instant tracking help out?

Thanks for bringing this up. We have to differentiate between slightly different use cases here and the different types of 3D tracking solutions suitable for each. You are right, 3D instant tracking is most suitable when used in unknown spaces, rooms and environments. When actual recognition is required, for example of a hydraulic pump model xyz, you would use our 3D object recognition technology, which we introduced at Augmented World Expo in Berlin last October, mostly focussing on IoT use cases. As for the man examining the pipes, that is yet another technology available through our new SDK 6 called "extended tracking". After scanning a unique marker of your own creation and choice – which you can see in the video at the top left – the man examines the pipes without having to keep the marker in the tablet's field of view, giving him the freedom to examine the entire wall of pipes.


(Note from augmented.org: This video shows their instant tracking. You can read more about their IoT approach here.)

We just had the examples of architecture and machinery, so let's talk about more use cases: the press release specifically states indoor and outdoor usage. Let's say I build a university campus navigation app that needs to bring me to the right building (outdoors) and then to the right meeting room in the dark basement (indoors). Is switching between tracking methods seamless, and can they be used at the same time? How do I use it?

This first generation of our 3D instant tracking is optimised for smaller spaces. What you are describing would, I think, involve the mapping of very complex areas and structures such as the pathways of a university campus. To be honest, we have not fully tested this use case yet. What I can tell you is that it performs quite well in both light and darker environments; it cannot be completely dark, of course, as it is a vision-based technology.

So, let's talk a bit more about your tracking technology. Your team claims to have improved recognition quality heavily, especially in rough conditions. Do you think there is still room for more, or have we reached the limit of today's handheld devices' sensors? Do you plan to support Google's Tango or a similar technology in the near future to go beyond?

To answer the first part of your question, which refers to our 2D tracking: yes, there is always room for improvement. However, our 2D tracking is a very mature product, since we have been working on and improving it since 2012. I think it is not too self-confident if I claim that it is the best 2D tracking in the market today. With regards to Google Tango support, we currently do not plan to support this niche platform. As you know, there is only one consumer Tango device out there today, the Lenovo Phab 2 Pro, available in the US and a few additional countries, so the market share is less than 1% today. With ASUS and other OEMs there will be more coming this year, but it will be quite some time until we have a significant base of market penetration, making it worthwhile for developers to build on top of this platform. As long as this is the case, Wikitude will be focussing on the iPhones and Android-powered smartphones out there by the billions today.

Everyday AR in everyone's pocket is still not there on a broad scale. If you count Pokémon, it had a short rise in 2016, but AR is still a niche for consumers. Do you agree? What do you think will push AR out there?

I agree to the extent that AR is still a niche for many people out there, but we are in the middle of changing exactly this, as the three important puzzle pieces are coming together: hardware, software and content. Pokémon Go was of course a great example of the potential of consumer AR, but we will need more killer apps like it.

What do you think is missing?

The main challenge from a technology point of view is to recognise and track the three-dimensional world around us all the time, flawlessly, without any limitations. Wikitude's 3D instant tracking technology is a big step forward, but there are many challenges still to be solved, which will keep us and our competitors busy for some time.

Looking at competitors and partners… hardware players that are more industry-focussed, like Epson or DAQRI, are successfully building their HMDs for their clients. Others who are also looking at consumers are preparing their launches of AR software and hardware – be it Microsoft with Holographic or Meta. Do you think AR glasses will bring the breakthrough?

Whether it's standard mono-cam smartphones, Tango and "Tango-like" devices, or HMDs as you mention, all of them will have their place in the ecosystem. However, I do believe that devices like HoloLens, ODG's R9 and Magic Leap's "magic device" will change everything in the mid to long term, once they become small and ergonomic enough to be worn by end consumers. The main advantage of these devices is of course that you do not need to touch any displays with your hands, and that they are always there in front of you, with the potential to give you very context-rich information and content as and when you need it.

Will you be on these platforms?

Wikitude is already on these kinds of devices. We have created customised and optimised SDK versions with our partners at ODG, Epson and Vuzix, which are available for purchase on our website now.

In the very beginning, I saw Wikitude only as a nice GPS-tag viewfinder. Today we are at version 6 and it has become a complete AR SDK. What will we be able to see with it in the near and far future? Could you give us a glimpse?

As indicated above, Wikitude is fully committed to augmenting the world around us. As the world as we know it comes in three dimensions, Wikitude will continue to push the envelope to provide the best 3D technology solutions, enabling developers to recognise and track objects, rooms, spaces and more. Different algorithms are needed for different scenarios today; we will not stop working until these different computer vision approaches can be merged into one, which is the ultimate goal.

That brings me to my next question: when do you think we will reach the positive point of no return, where everybody makes use of AR naturally?

This will be the case when the real and virtual worlds become indistinguishable from a technological and daily experience point of view.

Alright. So be it indistinguishable or obviously augmented: what do you think is the biggest opportunity AR technology offers the world?

My favourite use case of AR is teleportation. I have been living and working in many distant parts of the world over the last 20 years. When AR technology can render a high-quality 3D model of my family members right next to me and merge it with my immediate physical environment, even though they are a thousand miles away, I think it would make me, and the millions of other people traveling out there all the time, very happy. If you are interested in reading a bit more about this topic, you may want to check out my recently published article on TechCrunch.

Great! Thanks for your answers.

Andy: My pleasure!


So, that's it. I can sure relate to the teleportation concept; I long for it very much, too. Currently I'm trying to get around it in AltspaceVR and other solutions, but a proper holographic buddy or family at my desk would be best. Well, it seems like Wikitude is following its path well, enhancing AR tech even further for mobile and currently available head-worn glasses. I will surely check back to see what others make of the new SDK features. If you want to read Andy's TechCrunch article, it's here. So, stay tuned for more AR soon to come, as always!

– Toby.

Pimax 8K VR Headset Not Yet Ready For Primetime, But 4K Model Impresses

Several weeks ago, we published a story about a company called Pimax. The scrappy startup was announcing its CES 2017 lineup, which this year included an 8K (4K per eye) resolution, 200-degree field of view virtual reality headset. An HMD that powerful would be highly disruptive to the current VR industry, where resolution and FOV are two of the most difficult problems to solve and scale. We had the chance to try the Pimax headset for ourselves on the show floor at CES, and what we found was a device that does have promise, but falls far short of the promised Vive- or Rift-killer.

Form Factor

The Pimax 8K is wide. It's much wider than any other VR headset on the market today and is reminiscent of the StarVR headset, which has kept a quiet, low profile for some time now. The outer limits of the headset's horizontal plane extend a few inches past your head on either side, and the overall visor curves in at an angle towards a central point. This makes the entire unit much more curved than the Rift, Vive, or PSVR. The emphasis here is clearly on prioritizing that big FOV over a sleek or overtly comfortable design.

Fit

The 8K headset fits fairly well on the face and is surprisingly light for such a large device. The ergonomics are sacrificed a bit, however, in favor of a wider FOV: the weight seems to rest entirely on the bridge of your nose, and the end result is a new red mark to rival the infamous Oculus oval. The 8K is still in active development, though, so comfort levels could still improve.

Performance

This is the big question: does the Pimax 8K deliver on the promise of a revolutionized display with industry changing resolution? The short answer is: no.

The long answer is that the Pimax HMD has a lot of promise but stumbles in a few unforgivable areas. The first is a complete lack of positional tracking. The Pimax representatives on site assured me that positional tracking would be added by this spring, but for now you're limited to rotational head tracking only, a far cry from what it will take to unseat the current kings of the VR hill.

The second problem is brightness. All the pixel density in the world doesn't amount to much if the pixels can't be properly illuminated. The Pimax undoubtedly has the largest FOV of any VR headset I've ever tried, and there was some extra crispness to the image from what I could tell, but its screen was simply too dark to enjoy any of those innovations. The Pimax team took my findings to heart and said a brighter backlight is expected to be incorporated into the next prototype.

Finally, Pimax headsets do not use OLED displays; instead, the company has chosen to use software algorithms to aggressively optimize more common LCD screens. They call this technique "brainwarp," and it does work. The LCD images followed my head movements with less latency than one would expect, with little to no ghosting (pixels that change color too slowly and cause a blurry image) at all.

Conclusion

The Pimax 8K headset was one of my great hopes for CES 2017 but, unfortunately, it's still a bit too immature for the big leagues. The company will be launching a Kickstarter and raising additional funding soon, and perhaps the extra capital can help it overcome some of these issues and create a more fully realized product. Until then, it's like they say: if something seems too good to be true, it probably is.

Bonus: A New Challenger Appears

Also at the Pimax CES booth was the company's older 4K model. This unit provides 2K resolution to each eye and also employs brainwarp software optimization. Whereas the 8K design was wide and bulky, the 4K edition was lightweight and very similar to the Oculus Rift in form factor.

The LCD displays inside provided a crisper image than anything I've personally seen in VR. This model could also have benefited from a brighter backlight, and there was still no positional tracking, but I felt the Pimax 4K gave me my best look yet at what a higher-resolution future for VR could look like.

Pimax is an innovative and exciting company and while none of their products are ready to come out of the oven just yet, I for one am very excited to see what they can do with just a bit more cooking time.


Image Credits: Pimax, Golem, VRNerds


Hands-On at CES With Intel’s Project Alloy Standalone VR Headset

Project Alloy from Intel is a prototype VR headset with important new features from one of the world’s most influential tech companies, and we’ve just tried the first hands-on demo of the hardware at CES.

Alloy sits in the same standalone category as the Santa Cruz prototype from Facebook's Oculus, meaning the hardware you wear on your head includes not just the display, but also the rendering and positional tracking technologies that are fundamental to making VR work. Unlike the Oculus Rift and HTC Vive, no outside hardware, sensors, or cameras are needed.

Developer kits for Alloy are already in the hands of Intel's partners, and the company expects it to be turned into a consumer product by the end of the year. It is a bit of a heated race for Intel, because Microsoft has already announced partners working with the tracking technology it developed for HoloLens. This critical technology is a prize for Microsoft, developed over a number of years, and both Facebook and Google (along with many others) are racing to match it. Heading into an era of mixed reality, if Intel is to retain its position as a supplier of fundamental technology for a wide range of manufacturers, it needs Project Alloy and its tracking technology to be a solid platform upon which partners can build.

So how did it work? I am one of the only people in the world to have tried both Facebook's prototype and Intel's, so I have some perspective others don't. That said, my time in each headset was extremely limited, the prototypes are in ongoing development, and my impressions are totally subjective. So keep that in mind as you read on.

Intel Merged Reality

I can’t make too many conclusive statements about Intel’s technology, except to say that it won’t deliver an experience that feels anything like the one depicted in the “merged reality” video below anytime soon.

Instead, the Project Alloy demo I experienced “drifted” considerably. If it had been me wearing the Project Alloy prototype in the video above, I would’ve walked into a door.

Intel said it has the technology to scan a room while the headset is on, but in my demo it had been scanned beforehand. This scanning process should allow software to dynamically mold itself to the physical geometry of the room.

CES 2017 VR And AR News Roundup: Everything You Might Have Missed (Updated)

The Consumer Electronics Show in Las Vegas, NV is traditionally one of the largest tech events of the year, and CES 2017 is no different. Many of the biggest companies on the planet, from Samsung and HTC, to Facebook, Amazon, and Google, all the way to Ford, Mercedes, and countless startups, gather in the desert at the start of the year to unveil the latest and greatest advancements in consumer electronics. There's also usually a heavy dose of vaporware and eventually broken promises.

With so much happening and the news coming in so quickly, it’s easy to miss stories. At UploadVR we’re dedicated to addressing the VR and AR industries as thoroughly as we can, which often means a torrential downpour of coverage. It’s hard to keep up with.

So at the beginning and end of each day of CES (starting today) we will update this article with links to all of our CES news and headlines. For the full scoop you'll have to click through to the specific story, of course, because we don't want this article ballooning to several thousand words you may not care about. Consider this your consistently updated one-stop shop for all of the news from CES 2017 so far.

This story will be updated and republished as needed in order of most recent date first. All of Friday’s CES news is added down below!

Friday, January 6th

CES is now officially in full swing in Vegas, but the big news and announcements have all mostly happened at this point. We’ll be spending our time doing interviews, getting hands-on with some of the newest devices, and checking out all there is to see at the convention.

Intel’s ‘Merged Reality’ Demo Brings Actual Hands Into VR on Oculus Rift [Link]

Zombie Hunters Rejoice! ‘Arizona Sunshine’ to Receive Long-Awaited Full Locomotion Update [Link]

Mobile Room-Scale VR ‘Fully Doable’ With Vive Tracker Says Dev [Link]

Hands-On: KwikVR Wireless Kit For Rift and Vive Releases in March for Around $300 — Save Your Cash [Link]

SketchAR Lets You Trace Over AR Images To Become The Artist You’ve Aspired To Be [Link]

Dr. Oz is Using VR to Create The ‘Google Maps’ of the Human Body [Link]

‘Mindshow’ CEO: VR Lets Animators Do What ‘Would Have Cost Hundreds of Thousands of Dollars’ Before [Link]

Thursday, January 5th 

The actual show starts on this day, and more news is underway. There's a set of boots that lets you feel the digital worlds you're walking through and a candle that lets you smell blood while playing Resident Evil 7. The biggest story of the day was our full hands-on demo of Intel's Project Alloy merged reality headset.

This HoloLamp Projector Lets You See AR Holograms Without Using Glasses [Link]

The Taclim VR Boots Want You To Kick The Future In The Face [Link]

Indulge Your Sense Of Smell With An ‘Old Timber And Blood’ Scented ‘Resident Evil 7’ 4D VR Candle [Link]

Lumus Showed Off New AR Glasses With 55-Degree Field Of View [Link]

‘Monkey King’ is a New Cinematic VR Series from Digital Domain [Link]

The Pico Neo CV Is A Fully Untethered, Positionally Tracked VR Headset [Link]

Dell Precision 7720 Is A VR-Ready Mobile Workstation Using Nvidia Quadro GPUs [Link]

Hands-On at CES With Intel’s Project Alloy Standalone VR Headset [Link]

New ‘Arizona Sunshine’ Las Vegas-Themed Game Mode ‘Undead Valley’ Coming in Feb [Link]

Forget 4K — The Insta360 Pro Is An 8K 360-Degree VR Camera [Link]

Nyko Unveils PS VR and HTC Vive Charging Docks and Other Accessories at CES [Link]

CyberPowerPC Introduces New PCs Including A Line Of VR-Ready Laptops [Link]

Wednesday, January 4th

This is when the real downpour started — and the official date for CES isn’t even until the following day! The ASUS ZenFone AR is both Tango and Daydream ready, Lenovo showed a new Project Alloy demo, and the cherry on top was the debut of the Vive Tracker for new accessories and the upcoming Deluxe Audio Strap.

‘HOLO CUBE’ Lets You Hold And Play With AR In The Palm Of Your Hand [Link]

Eye-Tracking VR Headset FOVE 0 Costs $599, Starts Shipping January 2017 [Link]

Vuze Is An $800 VR Camera Releasing In March With 3D Audio Support [Link]

Ricoh Unveils The First Camera Capable of 360-Degree Recording and Livestreaming For 24hrs [Link]

Dell Unveils Inspiron 15 Gaming Notebook And Premium Alienware Hardware For VR [Link]

We Visited The First Oculus-Powered Bar In Las Vegas [Link]

Audeze and JBL Unveil Headphone Solutions Specifically Developed for VR [Link]

ASUS Officially Unveils The ZenFone AR, The First Daydream and Tango Ready Smartphone [Link]

Gap And Google Just Showed Us How AR Shopping Will Work [Link]

Samsung Confirms 5 Million Gear VR Mobile Headsets Sold To Date [Link]

Intel’s CES 2017 Press Conference Put The Audience Inside VR [Link]

HTC Announces Subscription Plan For Viveport [Link]

HTC Vive Integrates Audio With New Deluxe Headstrap Similar To Rift’s [Link]

HTC Announces Vive Tracker to Power Next Generation VR Accessories [Link]

AMD Debuts Extreme Performance PCs Powered By Their New Ryzen Processor [Link]

Project Alloy Debuts Multiplayer Merged Reality Gaming, Shipping in Q4 2017 [Link]

Tuesday, January 3rd

Even though this was two days before CES 2017 "officially" starts, we were already seeing a flood of news and announcements. Lenovo's holographic VR headset is set to make a debut this week, and new glasses, cameras, and more are on the horizon. Qualcomm's Snapdragon 835 processor is one of the main highlights, emerging with powerful specs for mobile VR devices.

Lenovo’s Windows Holographic VR Headset Debuts At CES [Link]

Asus’ $799 VivoPC X Is A Compact VR Ready PC That’s Prepped To Go [Link]

3DRudder Wireless Is A Cross-Platform Foot Controller That Lets You Move In VR [Link]

Tactai Gets In Touch With Ericsson For AR and VR ‘Dynamic Tactile Wave’ Tech [Link]

Hubblo Announces 360-Degree VR Camera That Makes Streaming 4K In 3D Portable [Link]

ODG Debuts Two New AR Glasses Aimed At Consumer Market [Link]

Qualcomm’s Snapdragon 835 Processor Aims To Supercharge Mobile VR And AR [Link]

Faraday Future Used VR To Design Their Flagship “Fully Connected” Car [Link]

Pre-CES News and Announcements

Before the show even started, we learned about a lot of what to expect. Wireless accessories will be on display, analysts and Upload staff alike are making full predictions, and there will actually be a large amount of content — not just hardware — on the show floor to see.

Upload’s CES 2017 Predictions [Link]

Upload’s CES 2017 Party Details [Link]

Wireless Rift and Vive Add-On KwikVR Set To Debut At CES [Link]

Accenture’s Top 5 Predictions For What Will Be Hot At CES 2017 [Link]

Pimax to Unveil 4K Per Eye, 200-degree Field of View VR Headset at CES 2017 [Link]

Here Are 23 Of The 30+ Experiences Vive Is Showing At CES [Link]


2017 Kick-Off with daydreaming of tango

No time to lose! 2017 is kicking off with the first AR news pretty quickly. Well, okay, this was expected: every year the new electronics announcements arrive just after Christmas, so that everybody ends up disappointed with the gadgets they just bought in 2016. In Spain, people would still have time to shop before Reyes on January 6th, but of course the new announcements don't appear in the shops directly.

During the Consumer Electronics Show (CES) in Las Vegas we get cool new gadgets presented every year, right after it starts (and while we are still practically saturated by new toys and the Christmas duck). This time, the AR kick-off came from Asus showing off their (rumoured and expected) Zenfone AR. Some specs had leaked before, but now we know for sure that it actually exists, how it looks and what it is capable of!

If you haven't been living under a rock, you will know about Google's Project Tango and the recently presented Lenovo Phab 2 Pro. It was the first consumer device implementing Google's AR technology, which uses (among other sensors) a wide-angle RGB camera (for SLAM tracking) plus IR depth camera data to capture depth information about the world. While typical smartphone AR demos don't know much about the real world, this approach, similar to Microsoft's HoloLens implementation, lets us integrate virtual objects way better than before! We get stable inside-out tracking on a phone and can do collisions and occlusions for AR objects, and can thus integrate those imaginary friends (er, objects) more convincingly; a toy sketch of the occlusion idea follows below.
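To illustrate why that depth data matters, here is a toy sketch of depth-based occlusion (a concept illustration only, nothing to do with Tango's actual API): a virtual pixel is drawn only where the virtual object is closer to the camera than the real surface behind it.

```python
# Toy depth-based AR occlusion (concept illustration, not Tango's API).
import numpy as np

def occlude(virtual_rgb, virtual_depth, camera_rgb, real_depth):
    """virtual_rgb/camera_rgb are HxWx3 images; depths are HxW in meters,
    with np.inf where the virtual layer is empty."""
    visible = virtual_depth < real_depth   # virtual surface sits in front of the real one
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]    # draw only the unoccluded virtual pixels
    return out
```

Without the real-world depth map, an AR app can only paste the object on top of everything, which is why classic RGB-only smartphone AR always looks like a sticker.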

The first phone supporting the full mixed reality continuum

While the Phab 2 Pro seemed more like another dev kit, impractical in size and prone to battery heating, the Asus version gets better (so it seems). The specs promise a high-end smartphone that fits into your pocket (maybe even your hand). But the most important fusion: it supports not only Tango, but also Google's Daydream VR specification. It is the first phone to support Daydream VR alongside Tango, and thus the first to support Google's implementations of AR and VR at the same time. (Obviously other phones like the Galaxy S6 support VR and could do "classic" RGB-camera-based AR, too.) This upcoming release could be a big step toward integrating mixed reality into our lives. Most probably more and more high-end Android phones will support both systems and make the additional sensors a commodity, like GPS is now. While we wait for all the cool glasses, maybe the roll-out of AR into everyday life will after all still happen through the brick in our hand, allowing for an easy transition!

Ah, you are a spec lover? So, as said, the Asus Zenfone AR brings Tango and Daydream VR united in one smaller box. It has a 5.7″ screen at 2,560 x 1,440 px and a huge 8 GB of RAM. It uses the (not quite brand-new) Snapdragon 821 chip and ships with a 23-megapixel rear camera plus all the other sensors typically needed for Tango, Daydream and a high-end phone. It runs Android 7.0 and is supposed to ship… only in the second half of 2017. No price given. Darn!

But let's wait and see who joins the circle next and what happens in the coming days at CES and at Mobile World Congress at the end of February. I'm sure the Zenfone won't stay alone for long!

Happy new year! … and enjoy Johnny Lee and the other folks if you have the time: