Toy Story: One Former Pixar Artist’s Quest To Make VR Animation Accessible

Today, a company called Limitless is announcing a suite of virtual reality animation tools that could revolutionize the way immersive films are created. This is the story of how they got there. 

Tom Sanocki was falling in love.

The object of his affection was not the waitress at the coffee shop, or the new girl from accounts payable. Instead, Sanocki’s heart was being captured by an army of deadly jellyfish that would one day attack two hapless fish on their quest across the ocean to rescue a missing son.

His SGI desktop, running IRIX, whirred impatiently under the strain he was putting on it, but the deadline was approaching too fast to be cautious. The dark offices were bathed in the soft blues and greens of the ocean emanating from his workstation, but one could still just make out the letters above the door: P-I-X-A-R.

Sanocki was working late, but despite the project’s long hours, heavy workload and never-ending stream of technical puzzles to unravel, he had never been happier.

Sanocki smiled victoriously as he watched the tendrils of the jellyfish before him undulate in exactly the manner he had hoped they would. Months of work were finally beginning to pay off, and his grin widened even more at the thought of showing this to Andrew in the morning.

With a satisfied yawn, Sanocki stretched and took one last look around the room, which now resembled a large indoor aquarium. He was exhausted, but his smile never wavered. Just as he was about to get up and leave, he noticed a small imperfection in the movement of one of the jellyfish. Rather than cursing and smacking the monitors, Sanocki felt a leap of excitement at the prospect of a fresh challenge. He leaned forward enthusiastically and got back to work. Because, in the end, Tom Sanocki is a man who loves one thing more than any other: solving problems.

From Princeton To Mater

Before he became a character artist on Pixar’s Finding Nemo, Sanocki first had to make his bones in computer science at Princeton in the mid-90s.

He attended the prestigious university in the hopes of enrolling in the computer graphics program of a professor he respected. However, just a few months after his enrollment, that professor left the Ivy League to pursue his craft elsewhere.

This left Sanocki somewhat in the lurch, but he did the best he could to support his passions through other means. By auditing a few animation classes and cracking (more than) a few books on computer science, he was able to scrape together a couple of clever short films and form the beginnings of an artistic portfolio.

Upon graduation, Sanocki received word from a few friends that their company, Pixar, was going through a hiring spurt and that he should seriously consider applying. And so, armed with his newly finished portfolio, Sanocki landed his first computer graphics job at a company already considered to be on the bleeding edge of his field.

Three months and a second graduation later, Sanocki passed the Pixar University training program and requested a posting in the characters department. He had found, through his months of study at the company, that characters would offer both the artistic and technical challenges he desired for his fledgling career. He was assigned to a young director named Andrew Stanton and set to work on the militia of jellyfish that would one day stop America’s collective heart in the movie theater.

Sanocki found an instant delight in his work at Pixar because, as he puts it, “we were making everything up as we went along, there was never a shortage of problems to solve and we were inventing most of the solutions. We were building stuff that simply didn’t work…until it did.”

After completely rebuilding the character pipeline process and solving the cloth-modeling problems for the jellyfish, Sanocki had helped create one of Finding Nemo’s most memorable scenes. After that, he found himself taking lead roles on characters for several of Pixar’s greatest hits.

He worked on the problem of quadruped movement for Ratatouille and helped build a new hair simulation engine for the flowing crimson locks of Princess Merida in Brave, but his biggest claim to fame came in the form of a rusted old pickup truck named after a fruit.

Mater (short for Tow-Mater), the busted-down country pickup from Pixar’s Cars, is the closest thing to a digital son Sanocki created while at the studio. He wrestled with the complexities of animating an automobile with enough personality to seem human, shaping every pixel to bring out the character of the fast-talking, slow-driving, lovable simpleton.

Today, Mater can be found on millions of lunch boxes, toy boxes, posters and other entries in Pixar’s merchandise machine. Over time, though, Sanocki was finding himself with fewer and fewer chances to do what he loved at Pixar.

Manifest Destiny

As the decade turned over, Sanocki was starting to lose interest in his work at Pixar. The studio had been so successful at solving complex problems in its heyday that there were now very few left with which he could wrestle. The company was shifting toward sequels and other properties that simply didn’t require as much creative muscle to produce. The systems were already in place, which meant it was time for Sanocki to find a new challenge.

Having nearly 10 years of experience at Pixar opened more than a few doors as Sanocki began his search for a new position. He fielded a few attractive offers but there was one that captured his attention more than any other: Bungie.

The legendary video game studio had recently lost its most famous IP, the Halo series, due to a round of corporate maneuvering with Microsoft. However, rumor had it that the ambitious group was working on something even more impressive — something the gaming world had never seen before.

The promise of uncharted territory was enough to get Sanocki on board and so he joined Bungie as its character design lead only a week after leaving Pixar. The project he began working on would eventually become the monstrously successful sci-fi MMO known as Destiny.

At Bungie, Sanocki helped devise a system in which a multitude of custom character variations could be created by players without sacrificing the performance or consistency of the game. His work paid off handsomely, and he stayed with Bungie for several years as Destiny released and began making millions through software sales and downloadable expansion packs.

However, once the game found its footing, Sanocki began to feel that same old itch to move on and find new challenges to solve. His true love was calling and he wanted desperately to answer. His phone eventually rang, and on the other end was a man who would change his life forever.

When The Future Came Calling

The voice on the other end of Sanocki’s receiver was one he recognized well. It belonged to Max Planck, a 10-year veteran of Pixar and one of Sanocki’s oldest colleagues. Planck was calling to see if his old friend would be interested in trying out his newest toy: a prototype virtual reality headset being hawked through Kickstarter. They were calling it the Oculus Rift, and Planck had managed to score a Developer Kit 1, or DK1 for short, through a generous donation to the online fund.

Sanocki, ever-eager to be a part of something new, met up with his friend and strapped on the headset. His initial reaction was that this thing was absolutely making him sick, the resolution was terrible, and he had never been more impressed. In that moment Sanocki was convinced he was looking into the future and he removed the headset to find his old friend grinning back at him.

Planck wanted Sanocki to join a VR animation company to do for this new medium what they did at Pixar: build things that could never work…until they did. Sanocki thought seriously about the offer but had to decline due to his obligation with Bungie. He felt he couldn’t simply leave in the middle of a development cycle, but a seed was planted in his mind and he could never quite manage to shake the wonder he experienced while strapped into that DK1.

A few years passed, in which Planck ended up going to work for Oculus himself and forming what we now know as Oculus Story Studio. Sanocki, meanwhile, found the seed in his head had matured into a full-blown flower of an idea, and he was finally ready to make the jump.

Startups and Seagulls 

In April of 2015 Sanocki left Bungie feeling that familiar desire to solve something new. This time it was not a swarm of deadly undersea predators, the bouncing curls of a Scottish princess, or the rusty bolts of a lovable pickup truck that had stolen his affections. This time it was not a character at all. This time it was a possibility.

Sanocki is now the CEO and founder of Limitless Ltd. (a small joke he takes infinite delight in), a company committed to pioneering the future of animation. The studio’s first real product of note was a short VR film titled Gary the Gull. Gary is about seven minutes long and features a fast-talking seagull and his crab sidekick’s attempts to steal your lunch as you relax on the beach.

What’s unique about the project is that it is interactive. Gary responds to your words, your gaze, and the motions of your head throughout the experience. This ability to turn audiences from passive observers into active participants in a film is what has truly enraptured Sanocki’s imagination these days and, fortunately for a man like him, it comes with plenty of new problems to solve.

The Toys Are Alive 

Tom Sanocki fell in love almost twenty years ago and now he wants to give others the chance to do the same. He is attempting to solve many of VR animation’s problems, and empower aspiring artists, through the release of Limitless’ latest product: the Creative Environment. This is a cloud-based suite of animation tools that harnesses the unique capabilities of VR to accomplish days’ worth of animation work in minutes.

For example, a demo version of the suite lets you play around with Gary and pals in the HTC Vive headset. By grabbing Gary and holding down the grip button on your controller, you can move him along an animation path that is recorded, then sent to the cloud and stored. By placing an “animation bubble” on this floating blue flight path, you can make Gary flap, squawk, or stop to your heart’s content.
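
Limitless hasn’t published the internals of the Creative Environment, but the record-then-annotate loop described above can be sketched in a few lines of Python. Everything here – the controller and character objects, the 90 Hz sampling rate, the clip names – is an illustrative assumption, not Limitless’ actual API:

```python
import time

class AnimationPath:
    """A recorded motion path plus 'animation bubbles' (clips pinned to it)."""

    def __init__(self):
        self.keyframes = []  # (timestamp, pose) samples along the path
        self.bubbles = []    # (keyframe_index, clip_name), e.g. (12, "flap")

    def record(self, controller, character):
        # While the grip button is held, sample the controller pose and
        # snap the character to it, building the floating blue path.
        while controller.grip_pressed():
            pose = controller.pose()           # position + rotation
            self.keyframes.append((time.time(), pose))
            character.set_pose(pose)
            time.sleep(1 / 90)                 # sample at headset frame rate

    def add_bubble(self, keyframe_index, clip_name):
        # Pin a canned clip ("flap", "squawk", "stop") to a point on the path.
        self.bubbles.append((keyframe_index, clip_name))

    def upload(self, cloud):
        # Persist the recording so it can be replayed or edited later.
        cloud.store({"keyframes": self.keyframes, "bubbles": self.bubbles})
```

The key design point is that the “recording” is simply the user’s own hand motion, which is why hours of keyframing can collapse into seconds.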

According to Sanocki, even something as simple as getting a crab to scuttle along the ground a few feet represents perhaps eight hours of a traditional animator’s time. With Limitless’ creative suite it happens in seconds.

“Everybody remembers being a kid playing with their toys, picking them up and taking them through one crazy adventure after the next,” Sanocki explains. “We wanted to translate that familiar experience into a set of tools that make animation both fun and easy in virtual reality.”

In addition to the pre-set assets, artists can upload their own creations into the creative suite to start building their own stories immediately. More features and animations will be added over time, but for now that excited gleam is back in the eye of a man who has imagined new worlds and built beautiful systems to create those worlds for much of his adult life.

Limitless and Pixar have a lot in common in that they both began as sets of computer animation tools that only made films to prove their concepts were effective. Pixar started off releasing wire-framed, half-finished short films about bumblebees just to show its ideas were valid, and ultimately it transformed into the juggernaut it is today. Sanocki says he is more focused on creating amazing systems for animation than on producing animated features themselves, but he admits that the possibility of a similar evolution certainly exists.

The Limitless VR creative environment is currently entering a closed testing period. Animators can email the company to request a space, although access will only be given to a select few. A wider release is planned in the next three to six months.

Today, all that matters to the man who once battled a small blue fish with an armada of perfectly swaying jellyfish is making sure that anyone who wants to create can do so with ease.

The Future of Everything in MR @ Digility

Day 2 and the future

Hey guys, time to continue with my recap! Day two focused more on start-ups, investments in VR/AR and the vision for the future – where is all of this heading?

The start-up track had many interesting presentations and discussions, but I won't go into all the details here, especially since most of it tackled only VR – although many squeezed an “and AR” into their PowerPoint titles. The overall message was that you must start with VR today to learn, that it will have great impact and that it can improve all areas (efficiency at work, entertainment at home, deeper connection in journalistic documentary work, etc.). To summarize this business view of the world, let me quote Wolfgang Stelzle from Re'flekt:

AR and VR are still in their infancy. The majority of companies' [activities in AR/VR] are pre-product. So, it does not matter how you start – but please start now!

Everybody agreed on this during the day, and the feeling was (again) that augmented reality will be bigger in corporate environments: managers don't make fools of themselves (you still act in the real world alongside), and existing tasks and product views can be augmented. The right integration of AR is crucial, and big companies are starting to do it (e.g. ThyssenKrupp just ordered 24,000 HoloLenses, as rumoured by Stelzle).

In the year 2525…

But where could this lead? What will happen? The main track of day 2 looked deeper into these questions. It started off in the morning with IT guru Robert "The Scobleizer" Scoble and his vision.

To step us through it, he basically rounded up all the cool demos (by others) floating around the web right now: the ILM xLAB Star Wars example on Magic Leap, the Facebook social VR experience with its virtual selfie stick, the Dokodemo HoloLens video, and Project Arena for sports in VR.

Robert says that we are in the fourth transformation of computing and that AR and AI will change everything: first came PCs, then mobile and touch interfaces, and now VR and AR. We should have VR in our homes today to understand what is coming. In 2-4 years AR should become very important, and within the next six years it will be totally mainstream – “the tech is coming”.
For him, Mixed Reality is the combination of six technologies that need to be tackled: optics, fast uplinks, eye and room tracking, spatial audio, and categorization. Categorization? Recognizing and identifying the objects around you. What are these objects? What do they do? 3D image recognition plus artificial intelligence will do the major trick for big AR scenarios. Developing bigger solutions will only scale with AI in the back.

But what about current devices? Bulky headsets will be a thing of the past very soon, Robert claims. With the new Snapchat glasses on the screen behind him, he states:

Tech in glasses will be as small as a sugar cube within 2 years! The glasses will get cheap and everybody will wear them all the time. 5 years from now your entire life and world is gonna be flipped! It's gonna be an incredible time.

Scared? Let's not be. I can't wait to design this, guys – happy to be part of this new era, I'd say!

Losing my VRginity

Prof. Dr. Frank Steinicke from the University of Hamburg continued and went back a bit into the history of VR. The first VR wave in the 90s failed. The tech was just not ready: he showed us some old HMDs and locomotion devices – I remember trying one of those things at CeBIT '96, and it was really crappy! But a second reason could have been that computers became a boy's toy in the 80s (we had the first wave of nerd series on TV back then – think of Riptide!). This social change and the (less socially centered) perception of tech might also have hindered VR from flourishing, according to Steinicke.

But now, with all our smartphone tech and computing power, we just might make it this time! In 15 years computing power will be roughly 1000× today's (if Moore's law continues: doubling every 18 months gives 2^10 ≈ 1024× over 15 years), and the uncanny valley should be done for (see the evolution of Lara Croft).

If the immersion and representation become that perfect, we might be able to create empathy through AR and VR – and have the biggest impact on society then! The same idea carried through the panel afterwards.

A world like Snow Crash… without the weird stuff

We should use the tech as an empathy machine: to connect with others and to create emotions. We should use it for the better. Often-quoted classics such as the book Snow Crash or the younger Ready Player One could become reality sooner than we think. We just need to avoid the weird stuff! People will be spending too much time in VR and AR. It will be a problem. Robert adds:

VR is ten times more effective than morphine. It's a new drug!

Being so powerful, VR will play a key role in teaching, and we must therefore watch out to get it right for the next generation. People will be crying in their headsets. But productivity will also go up once everything works untethered and at a higher resolution. No more screens will clutter our desks; virtual screens will clean up the place. Tools like Tilt Brush are a great example of the new things coming: you just cannot do that without AR and VR. Here the new tech creates something entirely new. This is the direction we need to be going!

The Future of Everything

So, let's use the tech to be more productive and more entertained, but even more to create empathy and to connect with others. How much of our time will we spend in VR and AR?

Ela Darling talked about her virtual reality adult business here. VR can enable your fantasies – you can live dreams that are not possible in real life. You can be yourself and be genuine. You have the chance to really connect with someone. Both sides can walk away satisfied – be it physically or in a therapeutic way. (Ela says the business is 20% sex and 80% therapy.)

The closing panel continued to dream about the best use of the technology – without talking about the technology itself. It was moderated by the wonderful Monika Bielskyte (who delivered a wonderfully calm metaverse poem beforehand). Howard Goldkrand (SapientNitro), Marshall Millett (Aemass) and Alysha Naples joined the great philosophical campfire chat. Let me quote only very briefly, though that hardly does it justice.

We must use the great opportunity we have today to let people understand and connect. Let's not wait for the technology to be perfect before we start thinking about these social questions. Let's rather take small steps today and trigger small changes. Let's go beyond borders and start something new. Completely new. Marshall:

If something like a zombie shooter game is the killer app for AR and VR, we have a problem!

Howard adds that we should disrupt the current state and “be an amazing glitch in the system”. We can step out of our comfort zone into new worlds. Alysha:

We can create a landscape that gives you the feeling of being 100% out of your known system. If you can do this – why create a shooting game? Let's create wonder with this!

Let's create mixed reality experiences that let plurality exist, Monika adds. Let's invoke tolerance and curiosity. Let's use VR to help us make better decisions in life!

The nice wrap-up and look into the future went on like this for quite a while and was a perfect final moment for the conference. How will the metaverse grow? Will it flourish uncontrolled? How can we put the tech to its best use? Could we build a “what-if” machine with AI to visualize alternate versions of reality? How can it help humanity?

Conclusion

Well, it's not so easy to give a simple tweet-length conclusion here. The conference touched both sides: the technology/business and the social implications it all might have. Chapeau for this, Digility! On the tech side it would have been interesting to see more behind-closed-doors AR demos (come on, every big company has a HoloLens right now!) to catch up all at once – that will probably be due next year at Digility. For now, a bit more AR in practical use cases would have been nice, but the abstract discussions covered both AR and VR nicely.

Today, it felt good to quote a bit more from the social-impact side of all this. We might be at the dawn of a new computing age that will change the world like the industrial revolution or, later, the internet!

So, let's take our time, slow down and create something great. After all, it is in our own human engineering and artistic hands to do so! :-)

Think beyond Screens – Creating Worlds in AR/VR @ Digility

Day 1, Part 2

Hi guys, as promised I want to continue my report on the Digility event in Cologne last week. It's not my job to summarize everything (the hosts would sue me for giving away all the info for free), but I'd love to keep giving an overview of what happened on day 1 in the development track. It got a bit more technical at first – talking about e.g. performance (Nvidia), hardware (GoPro), indoor navigation with Project Tango (“Google Tango will be a game changer for indoor navigation, Google wants to scan the whole world … and they want (our) data”) or even how to orchestrate 360° sound for VR (Berlin Phil Media). But then it moved more towards content, use cases, storytelling and the meta level of all things.

Boot Camp for Reality

As a bridge to that, I want to briefly mention the talk by Clemens Conrad from Vectorform. He started by pointing to a classic quote attributed to Benjamin Franklin; if you haven't heard it, it's a good entry into VR and the question of why VR can help us out. Why should I learn within VR?

“Tell me and I forget, teach me and I may remember, involve me and I learn.”

They build virtual training software and an ecosystem to let people learn better and faster. They want to increase efficiency and satisfaction with fewer errors and casualties (e.g. for high-consequence, very dangerous jobs). Here it makes total sense to spend some time in a (safe) virtual place. Another company, SoftVR, showed a multi-user VR training scenario in which you had to work in a nuclear power plant. Training the right steps can be critical – it's surely better to drill them deeply in a boot camp first.

User tracking and analytics were shown to be a great tool to optimize training results. Did the user bend over too much? Can he or she reach the handle above the head? VR is a perfect match here, since you must move naturally through space. Training can be customized per person – if a user performs badly the system could give more hints, and fewer hints for the pros. A small sketch of what such a check could look like follows below.
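
None of the speakers showed code, so here is a minimal sketch of that kind of ergonomics check, in Python with NumPy. The function name, thresholds and coordinate conventions (y up, units in metres) are all illustrative assumptions:

```python
import numpy as np

def posture_flags(hmd_pos, hand_pos, handle_pos, eye_height):
    """Derive simple ergonomics flags from tracked positions (metres).
    All names and thresholds here are illustrative assumptions."""
    flags = {}
    # Bending too far: the headset dropped well below the user's
    # calibrated standing eye height.
    flags["bent_over_too_much"] = hmd_pos[1] < 0.6 * eye_height
    # Reachability: did the trainee's hand get close to the handle?
    flags["handle_reached"] = np.linalg.norm(hand_pos - handle_pos) < 0.10
    return flags

# Example frame: a standing trainee reaching for an overhead handle.
flags = posture_flags(
    hmd_pos=np.array([0.0, 1.60, 0.3]),
    hand_pos=np.array([0.40, 2.00, 0.3]),
    handle_pos=np.array([0.45, 2.05, 0.3]),
    eye_height=1.65,  # calibrated per user at the start of the session
)
print(flags)  # {'bent_over_too_much': False, 'handle_reached': True}
```

Feeding such per-frame flags into the hint system is one plausible way the “more hints for struggling users” adaptation could be driven.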

Close The Gap

John Peeters from Holition showed more specific AR examples out on the street and talked about their experiences so far. He sees digital tech as a good tool to connect people, but at the same time people often don't want to use tech in public. People are afraid of looking stupid.

Small working examples are digital try-ons with an augmented mirror (jewelry, watches, …), but there are also big fails in this field. If the digital prop jumps around or looks crappy, you would have been better off not using AR to show it at all. The digital add-on needs to be relevant for the store and come with great quality. Otherwise it's better to stick with this:

Don't always force yourself to use the latest technology. Try old technology with a digital twist! Never start by thinking about the technology. Choose the right tech for the right job!

For example, they did not augment a full jacket onto the person in front of the mirror, but rather used one physical jacket whose color alone was adjusted digitally (making it easy to try on different versions with a single gesture).

He proclaims that AR will be big in the beauty sector: digital make-up, previews that transfer make-up from friends' or celebrities' photos onto your own face – or (awesome idea, I'd say) digitally augmented make-up tutorials (hey, YouTube lady Dagi Bee, go do AR make-up now)! Go beyond the glass frame of your analogue mirror!

“Remove the glass ceiling”

So, time to go meta. Enough technology! What can and should we really do with all this tech? How should we do it? Ola Björling from MediaMonks gave us an idea of how to approach VR by thinking about human perception – and how to hack it. Typically, a story in, for example, a book only comes to life in your brain: you visualize it with your imagination. It's not that direct. Even TV shows or movies need to bridge the gap between the rectangle (the frame of the TV set) and us on the couch:

VR and AR now hold the potential of taking out the translation and interpretation of the presented information. It becomes a direct sensory experience! VR will not be at its peak when we figure out how to film 360 – we must look beyond. We must start all over again to reach its full potential!

Ola states that we must raise the quality bar dramatically for mixed reality experiences to keep people convinced: “The uncanny valley is extremely deep in VR”. To get there, everything matters. If haptics are involved they must work flawlessly (touching a real steering wheel of a car in VR must line up to the millimetre). If music and sound are there (they should be), they can become the deal breaker or the one feature that sells it. Music can be the “hotline to our emotions” and can create moods easily. Optimized spatial sound will help out in a subtle way and must play a key role! With everything, we must “aim higher to reach deeper”.

The 4th wall is down

Astrid Kahmke from the Bavarian Film Center shared this approach and talked about the design of the content and the stories: presence is key for VR magic!

If the quality is high enough (visually, in sensory inputs – but also in content) we can get immersed and convinced. The fourth wall is down and there is no distance between the narration and the consumer (= actor) anymore. You are active in the story.
The classic Campbellian hero's journey, with its arc of suspense, becomes the user's journey. New spatial narratives will push storytelling forward once again. Storytelling will and must change. People can interact even further (than in “normal” games, for instance), and designers need to build their worlds and stories in a way that lets us connect with the protagonist (= ourselves), believe the story and get emotionally attached. Also, people tend to enjoy their freedom in VR – let them run free. Don't manipulate them. Leave space to relax and enjoy. Storytelling needs to relearn its toolset!

Though Astrid loves VR and AR storytelling, she reminds us not to fall for it blindly. Does my story make sense in VR? Does it really fit? It's the moment of truth. We should use the right medium for the right story. Use a screen or pages for your story instead, if the result will be better. Your story counts, not the technology!

I'm afraid of Augmenticans

Speaking of what counts: Alysha Naples entered the stage again and reminded us once more of what matters most. (Again, some moments just don't work in a written summary, or even in a virtual reality. But let's try.) Will we go beyond screens? Probably yes. What happens along the way? Will we get it right? Will we adapt?

Alysha repeats once more that humans don't change; the technology around us does. But humans still live a physical life, have families and relationships, and are eager to learn, to share, to play and to explore the world.

Mankind is (unfortunately? hopefully!) here to stay. Technology changes over the years and should work for our best and meet our needs. Looking at today's approaches, we always see a screen in front of us. Be it a rectangle or a stereoscopic panel, video see-through or an optical see-through overlay – we still observe and experience the world through a mediator. Today the mediator can be easily identified – a TV, phone or computer screen – and can be easily turned off (see the illustrations picked by me).

(Image: Smash TV)

But once we have reality filtered through glasses or contact lenses, it might become hard to escape. We have big potential to get it wrong! I would also add that it becomes very dangerous: we are likely to turn into zombies that are even more dependent on the big corporate feed from the glasses/EyePhone manufacturer. Closed-system mixed realities will steer us into their ecosystem to consume – if no open metaverse evolves. (Well, let's keep that for another day.)

(Image: Futurama)

Alysha warns us not to think only about the wearer or user of AR glasses. We must not just think hard about the cool and crazy content for him or her – we also need to think about the non-wearers, the have-nots. What about the other people in the room? How do AR and spatial computing affect society? Will we turn into glassholes again? Will we run into a digital divide, with one half being afraid of the augmented people? Will they have a good reason to be?

We don't want a hyper-reality like the one Keiichi Matsuda has depicted multiple times in his videos. We are smarter than that! We will be clever enough to get it right! We are awesome! Oh, really? The image below is a good reason to rethink our arrogance:

(Image: a giant traffic jam)

The inventors of the automobile surely didn't plan for this to happen. They wanted to create safe, cheap and fast transportation and democratic access to it. Not so sure anymore that we will get it right? So let's think about AR and VR for a bit:

What we create today will affect the future long-term! We must slow down and think about our decisions!

So, how do we get it right? We need to ask the right questions. A simple example: don't ask “How can I type in VR in mid-air?”, but rather ask “Does typing in the air make sense?” Alysha wanted to throw the HoloLens out of the window when she was forced to enter a Wi-Fi password in mid-air (I can totally relate). We need to focus on the essence of things. When I want to text in AR/VR – what do I really want? I want to communicate asynchronously. I don't necessarily want to enter text in mid-air. We must abstract away from known approaches. We must go meta. Let's make room for the new and use the tremendous opportunity that lies before us today. Let's not build a Twitter client for our eyeballs, please!

Important aspects need to be addressed by us today. Will people fall into the virtual world? Will they fall into the screen (well, minus the screen borders)? The more you belong to the screen-space communities, the less you belong to your physical-space, real-life community. So, where do we want to belong? Probably both worlds. So, Alysha:

To belong, you must be who you truly are!

Let's keep this in mind; let's not go wild about technology. Let's take a step back and use it to our advantage: to realize our fantasies, to reach our true goals, to create great tools, to connect.

Today we are at a crossroads. Let's not make the wrong choice. Let's not rush into the prettier-looking green forest. Maybe we need to take the detour – the hard way – to get things right for mankind.

… To be continued in the next days …

‘Alien: Isolation’ is One of VR’s Missed Opportunities, But There’s Still Hope

Alien: Isolation is one of the best VR games never released. We look back at why the title was so revered by VR enthusiasts, and why there's now finally hope we'll see an official VR release after all.

One of the best video games based on the hugely popular Alien franchise in many years, Alien: Isolation was taut, tense and just plain terrifying in places. The title was received warmly by critics upon release and seemed to indicate a return to form for a franchise which had suffered a seemingly endless string of sub-par video game entries. Everyone was happy, with the exception of VR enthusiasts.

You see, in the run-up to Alien: Isolation's release on standard 2D gaming platforms, Oculus had featured the title prominently in its showcase line-up at various gaming trade shows throughout 2014. Demonstrated on the Oculus Rift DK2, the special made-for-VR demo had players trying to escape the clutches of our favourite xenomorph, and it was a huge success, reported widely in the gaming press – such that Alien: Isolation became one of the most anticipated VR releases ahead of the Rift's consumer launch.

And then, nothing. Prior to the game’s release in 2014, Eurogamer asked Creative Assembly what was up with VR support in the full game and the studio stated that “At present, it’s just a prototype and does not represent a game currently in development at this point in time. It’s a truly amazing experience though and brings the game to life in ways we could not have imagined when we started the project. It’s one of the most terrifying demos you’ll ever play.” The title eventually disappeared from Oculus’ showcase list and the game was launched with no mention of virtual reality support.

Many in the VR community were disappointed, and some more than a little angry, that such a promising VR title – one that could have been such a powerful ambassador for VR as a gaming platform – would now not materialise. However, to some community members, the advanced level of VR support demonstrated in Alien: Isolation at trade shows suggested that a significant amount of effort had been put into the game as a whole to make it work. It therefore seemed pretty unlikely that support had been removed entirely from the game prior to release; more likely it was merely hidden, waiting to be unlocked again.

See Also: Alien: Isolation in VR is Beautiful and Terrifying

Sure enough, within just a few days of Alien: Isolation's full release, community gumshoes found that altering just a few lines in the config files was enough to enable support for their Oculus Rift DK2 headsets. When this was done, however, it was immediately obvious that the game's VR support was even further along than many had hoped. With some minor exceptions, the game was fully playable in VR and, what's more, it looked incredible!

That's not to say there weren't problems. The very nature of Alien: Isolation's gamepad-based locomotion and the resulting yaw rotation meant that it could be uncomfortable for some, and scripted moments in the game wrest camera control away from the player – a definite 'no-no' when it comes to VR comfort. These challenges, along with the fact that consumer VR headsets simply hadn't been released yet, were probably the primary reasons why VR support wasn't included in the shipped game.

Unfortunately, as Oculus' development towards a consumer headset continued and its drivers and SDKs advanced, Alien: Isolation's unmaintained VR support became deprecated, and it is now no longer usable without some serious hacking about on older runtimes. Which means those wishing to sample the game's immersive delights on their consumer Rifts (or Vives, for that matter) were, to be blunt, shit out of luck.

Jurgen Post, COO of Sega Europe

Recently, comments made by Sega's European boss Jurgen Post have stirred hopes that a fully VR-enabled Alien: Isolation may surface after all. Speaking to MCV, Post said that “VR has caught the whole company's attention,” going on to state, “We have a lot of VR kits in the office and people are playing with it. We are exploring ways to release games. We've not announced anything, but we are very close to making an announcement.” Heartening indeed, but Post then went on to allude to the titles which might be first on the VR release roster and, predictably, Alien: Isolation was mentioned. “We did Alien: Isolation about three years ago on Oculus Rift, it was a demo that was bloody scary,” said Post. “To bring that back to VR would be a dream and dreams can come true… VR will take time, but we will start releasing some titles just to learn. It is a platform for the future.”

Nevertheless, some in the community feel so passionately about getting the opportunity to experience the game on their consumer-grade VR headsets that they've started a petition urging Sega to return to the title and finish what it obviously began years ago. The movement currently has over 750 signatures, and if you'd like to join the cause, head over to the Change.org page right here to show your support.

With virtual reality's consumer push now underway, the need for substantial, triple-A content to entice people to buy into this fledgling technology is stronger than ever. Alien: Isolation, if done right, could be one of those key titles for Sega.

Diving into Digital Realities @ Digility AR/VR Conference 2016 in Cologne

Recap Day 1, Part 1

Wow! Two exciting days at the Digility conference in Cologne are over – two days fully crammed with interesting talks, demos, panel discussions, technology, visions for the future, awesome ideas and fantasies, and fantastic people! Now I'll make my humble attempt to transcribe a glimpse of it and pass a bit of the speakers' ideas on to you!

The conference was basically split into six pieces: workshops on both days, the main track “brand experience & best practices” on day 1, “developing for VR & AR” on day 1, the future vision for mixed reality on day 2, the second track about start-ups and investments on day 2, plus demos from the partners. Today, let me start with the main track from day 1 and dive into the introduction and the best-practice reports from the industry. I won't cover all the talks, but rather trace the common thread of ideas.

A new hope, a new conference

Katharina Hamma from Koelnmesse introduced the show at the beginning. For those hearing about Digility for the first time: this is a new conference, held for the first time this year. It was co-located with photokina in Cologne's huge fair center and (nobody paid me to say so) professionally organized and executed, with about 1000 guests. They (obviously) focus on AR and VR and the industry growing around them – to learn about the technology and the stories, and to connect with other spirits in this field to push development and discuss the social impact of (my) favourite technologies. That's why I became an official media partner, to get you guys a report, too. (See my live tweets for more.)

The braveheart speech to kick things off

Alysha Naples stood up (though still too early in the morning) and set the scene for the two days. Let's go wild and talk about the future – no, wait! Let's first make a reality check! Human beings stay the same; we have the same needs. Although the mode or physical form may vary between decades, people pack the same stuff they used to pack ages ago (only changing the mode, from physical paper book to ebook, for instance). People are just physical. When we move half our stuff into the digital space (onto our phones, in 2016), it leaves the physical space – but still exists.

Now the problem comes along: when we work on our phone or computer, we leave the physical space (well, ok, our brain does) and focus on the screen space (and the virtual reality inside it). This mode switch between the two spaces is key – and a major problem! The concept of trying to avoid this switch holds huge challenges – but also chances – for mankind, and cannot be overestimated. How can this switch happen with glasses? What if I keep seeing the real world alongside? What if it's all VR'ed? How do we need to design interaction?

We must not carry over old metaphors to a new medium (did you ever catch yourself trying to pinch-zoom or tap-select a line in a physical book while thinking about your e-reader? Bingo!). We instead need to create something completely new for the new devices! We must remove the frame that separates us from the data on the screen! We don't need the screens anymore! Let's go in between! It's time for something completely new! Break the screen that separates both worlds. (Time to look at A-ha below and start singing…)

(Image: A-ha – Take On Me)

Grounded Technology

Gotcha. That's the plan, Alysha! Sounds awesome! So, what do we have today, out on the streets? Audi's Marcus Kühne kicked off the best-practice examples, giving insights into the past and future of the sales process at Audi. Again, people stay the same! The sales process has basically not changed over the last 60 years (going to a dealership with your wife, taking a look around, seeing a few real cars, getting talked into one, making a test drive, …). But the new situation in the 21st century is this: the car (or any product) has become far too complex to grasp everything. Too many configurations are impossible to present in a limited space; too much technology screams to be explained. This is the moment where VR (and AR, too) can stand up and shine! Help us, awesome mixed reality continuum!
Easier said than done… and here we dive into the technical issues. One example: the construction data (CAD data) of the cars runs to more than 50 million polygons, but it must be shrunk down to at most 5-7 million polygons, Marcus said (game engines typically stay even below that, at 1-2 million). Their attempts started at 50 fps with 45 ms latency (quite nice, but just not enough for VR). In 2016 they reached 90+ fps and 20 ms latency (quite all right for VR) with their partner ZeroLight (I mentioned them before). A rough idea of such a reduction step is sketched below.
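
Audi and ZeroLight haven't disclosed their pipeline, but the polygon-budget step on its own is standard mesh decimation. Here is a rough, hypothetical illustration in Python using the open-source Open3D library – the file name and single-mesh setup are made up; real automotive CAD arrives as thousands of separate parts:

```python
import open3d as o3d

# Load a (hypothetical) tessellated CAD export.
mesh = o3d.io.read_triangle_mesh("car_body.obj")
print(f"input: {len(mesh.triangles):,} triangles")

# Collapse edges (quadric error metric) until we reach a VR budget --
# the 5-7 million range quoted in the talk.
lod = mesh.simplify_quadric_decimation(target_number_of_triangles=5_000_000)
lod.compute_vertex_normals()  # recompute shading normals after the collapse

print(f"output: {len(lod.triangles):,} triangles")
o3d.io.write_triangle_mesh("car_body_vr.obj", lod)
```

The hard part in production is presumably doing this per part while preserving materials, panel gaps and silhouette edges – which is where a specialist partner earns its keep.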

But besides the tech there is more to it! You will run into conceptual challenges. If you want to sell your car with VR you can and must engage the user more. So, Marcus:

People get excited when you let them do impossible things!

You can trigger more emotional engagement by doing the impossible (like flying the user to the moon) and the unexpected – hence you should do it! Also, people love to interact themselves. Let them run free and build your demo or experience that way! Make it easy, and don't limit the user to a path on a string. Don't give the standard sales tour! Once again, exploration and engagement are key to success. (By the way: check out the (German) podcast VRODO TALK we recorded with Marcus a few weeks ago, if you are really into it!)

More lessons learned

Dirk Christoph from Innoactive continued and hopped onto the stage to show us their take on VR with the Media Saturn kitchen configurator that can be found in German stores. They stumbled upon many challenges as well, and talked about their issues in building a VR point-of-sale (PoS) system as a software company (coding nerds don't like hardware problems, I'd say). Convincing content – you want to sell the kitchen, in the end – combined with a flawless user interface for first-time users had to be found. So, Dirk shared their lessons learned with us.

A user survey revealed that 88% of people were (in general) comfortable wearing a head-mounted display and that 73.5% could imagine buying products directly in VR, while fewer than one in ten users experienced motion sickness. 88% would even love to design their own virtual environments (though we don't know whether the survey was representative, or whether mainly nerds and artists were asked). Overall, the presented realism, latency and interface worked well for most, but at the same time people complained about the bad resolution (27%), the weight and (un)fit of the goggles, the cable and the sweat factor.

Again, engaging people is key, Dirk confirms, following up on Marcus's talk. You need to make the experience fun and easy (don't overdo it with complexity). Possibly even integrate small mini-games to appeal to the users' inner child – for example, let them chop a cucumber on that kitchen counter; after all, it's a kitchen you are trying to sell. One could say this has no direct sales component, but I believe it actually does: is the counter high enough, do I feel comfortable cutting the cucumber here next to the air vent of the oven, and so on. People just like to be people and act normally – in a virtual world, too.

Speaking of acting normal…

How do you act normal in a windowless capsule while being shot at 700 miles per hour through a message-in-a-bottle tube? You get nervous? I bet I would. You are claustrophobic? You go crazy? Probably. Let's avoid that, please! Guys, come up here!

So, Dirk Schart and Wolfgang Stelzle from Re'flekt explained their approach to augmenting the windows of Elon Musk's hyperloop transportation system. Windows? Didn't I just say there are none? Right. The windows inside the capsules are just window frames with 4K OLED screens behind them, mimicking windows. People just like to be people and to live in a real world where they know what's going on. Again, Alysha's proclaimed switching of spaces is a problem. Is the real world right and in focus, or is it the screen window? Does it fit together in our brain? The user needs to have at least the feeling of knowing what's going on. The same applies to the hyperloop: a feeling of movement, speed and the environment will help us stay sane (no windows and no beer make Homer go crazy, I'd say).

The technology needed for this is (in this case) a regular screen plus a RealSense depth camera that tracks your head's position and viewing direction. From the calculated position you can set the virtual camera for the rendered environment presented on the screen-as-window in real time. Currently the user closest to the window gets the correct perspective, but future updates might bring multi-viewer experiences for a single window screen. The little sketch below shows the core of the idea.
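
The underlying math is the classic off-axis (“fishtank VR”) projection: the screen stays fixed and the view frustum is re-derived every frame from the tracked head position. A minimal sketch with NumPy, assuming the depth camera already reports the head position in the window's coordinate frame – all names and numbers are illustrative:

```python
import numpy as np

def window_frustum(eye, win_center, win_width, win_height, near=0.1):
    """Off-axis view frustum for a fixed 'window' and a tracked eye.

    The window is axis-aligned in the plane z = win_center[2]; the eye
    sits in front of it (smaller z). Returns glFrustum-style bounds.
    """
    d = win_center[2] - eye[2]          # eye-to-window distance
    s = near / d                        # scale window edges onto the near plane
    left   = (win_center[0] - win_width  / 2 - eye[0]) * s
    right  = (win_center[0] + win_width  / 2 - eye[0]) * s
    bottom = (win_center[1] - win_height / 2 - eye[1]) * s
    top    = (win_center[1] + win_height / 2 - eye[1]) * s
    return left, right, bottom, top, near

# A head tracked 0.6 m in front of a 1.0 x 0.5 m screen, slightly to the left:
print(window_frustum(eye=np.array([-0.2, 0.0, -0.6]),
                     win_center=np.array([0.0, 0.0, 0.0]),
                     win_width=1.0, win_height=0.5))
```

Re-running this every frame as the head moves is what produces the parallax that makes the flat panel read as a real window.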

Problem of going crazy: solved! But travelling would still suck the same way it does today. We can do more to make it interesting and entertaining, Dirk continues. Swap the environment from the real world (e.g. the California landscape) to something fantastic (I'd say use the Snowpiercer landscape! Worst movie ever – though I love Tilda). Obviously you could also show additional travel information, a movie, a game or personalized (I heard it coming) ads (d'oh).

But honestly, this is not really AR to me. (Though Re'flekt surely knows what true AR is!) Here it's just a well-solved real-time head-tracking problem used to present an interactive shop-window screen. But hey, it's a great first step and I'm sure there is more to come…

…maybe we will see this technology on transparent windows in German trains soon! (Re'flekt is evaluating ideas with Deutsche Bahn and their Innovation Train.) So, we can see: more players are jumping on board the VR bandwagon. A good time to jump right into the afternoon panel, where the discussion of lessons learned continued.

Don't wait too long. Start today!

On the panel we had Audi, Ola Björling from MediaMonks and Sven von Aschwege (Deutsche Telekom), with Alissia Iljaitsch moderating. The tenor was the same with all of them: today VR might be an early-adopter advantage, but soon enough it will be an established (and expected) technology. You will fall behind if you don't take part now. Better to start today with your first prototypes and learn over the next one or two years. But if you don't: bye-bye! In five years you must have VR, Marcus said. Ola added:

VR is the biggest blank canvas ever! Go creative!

Creativity gets even bigger in VR; you can do more with it. But you also must do more than just use the technology. Reach people emotionally (once the hype is gone, the tech and funky effects will be more in the way than helpful) with great content. Once again the buzzword bingo wins while being true: content is king! (Keep this in mind for my upcoming recaps about Digility.)

Well, ok, let's stop here for today with this classic (but true) phrase! As you might have noticed, the industry talked mostly about VR on this first day. You could also see that represented on the demo floor: countless Vives, Gears and Rifts were shown, but only a single HoloLens found its way to the conference (afaik). The queue, though, confirmed huge interest, and there is more coming up on AR. AR will get even bigger than VR. It will impact business and our society even more dramatically… We'll get there soon enough (I mean in one of my next posts, but hopefully – I'm afraid possibly – in real life, too)!

To be continued (very soon) …

Rugged AR for the Industry

Let's talk about head-mounted tablets, all right? What? Tablets? Yep, you read that right. What are they, and where do you use them in heavy, rugged outdoor industry scenarios? Let's find out.

I had an interview with Andy Lowery from RealWear – actually the first interview since they went public with their upcoming device and the press information! Today I'd love to share the long talk we had about AR, how the information revolution will affect our lives, their specific hardware and plans of course, and the advantages of head-mounted systems. I also had the chance to try out the latest prototype device. But since the interview was just too long to quote 1:1, let me sum it up for your convenience.

Andy Lowery is surely no noob to the AR scene or the industry. If you work in this realm you will know that he used to be the president of DAQRI, which also produces smart helmets, though with a different focus, according to Andy. Before that he was a chief engineer at Raytheon (working on electronic warfare), and way back he came from the US Navy as a nuclear surface warfare officer.

During his time at DAQRI he was pushing industrial AR, also working with partners like metaio. The idea came up to fix AR technology to any worker's hard hat in the field where needed: since people wear the helmets anyway, adding technology that can easily be put on and taken off along with them is a winning combination and a logical step. Since DAQRI was following another roadmap, he founded RealWear to push in this direction. Chris Parkinson from Kopin (who produced the Golden-i wearable system) joined forces with him, and they are getting closer to their release. Time to take a look at the device!

What can it do for whom?

Andy described the history of the development and the special requirements their clients' environments impose. Other smart-glasses competitors (like Vuzix) might not comply with these: in the oil, gas and mining business (out in the field), you need a ruggedized, dustproof, waterproof – and sometimes even fire- or explosion-withstanding – design. RealWear describes the system as follows:

“Featuring an intuitive, 100 percent hands-free interface, our forthcoming RealWear HMT-1 brings remote video collaboration, technical documentation, industrial IoT data visualization, assembly and maintenance instructions and streamlined inspections — right to the eyes and ears of workers in harsh and loud field and manufacturing environments.”

They approached the design by asking “What would I do with an industrial device? What would it look like?”, and two major aspects are key: it needs to be hands-free (people are working while using the system) and non-intrusive (for safety reasons). Andy said people in the field simply reject gadgets that require using your hands on glossy touch screens (“This is ridiculous!”). So the complete system is speech-controlled and can be pulled out of your field of view with one move.

Let's do the Live Demo

So, I got to try on the latest design. The first impression: you don't really notice the weight. It's as comfortable as it can get with a hard helmet on. The video screen arm can be easily adjusted so that you have it either directly in front of your eye (left or right) or in a peripheral position, where you only look at the screen when glancing down (keeping a clear sight ahead). Image quality and brightness looked good, too, though I first had to find a sweet-spot distance and angle to be comfortable for a longer session.

Then I browsed the menu and could trigger all commands through speech: open a document, change zoom level, close a window, play a video, open a report, write a report, take a photo, etc. Recognition of these fixed keywords was stable and was only triggered by me (not by others in the room saying the same phrases). The given tasks worked flawlessly, and some small but helpful features make the interaction easier. For example, you can either zoom and pan a document (e.g. a circuit-board layout) by speech, or alternatively activate a mode that virtually fixes the document in the air, changing the visible part by moving your head.

Speech recognition is more powerful when connected to the cloud, where you can use natural language to dictate reports; when working offline it is (currently) restricted to the fixed keywords. The sketch below shows roughly what such a fixed-vocabulary dispatcher could look like.
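
RealWear hasn't published its software internals, so this is only a minimal sketch, in Python, of an offline fixed-keyword dispatcher of the kind described above. The phrase list mirrors the demo; the ctx object with its viewer and camera members is a made-up stand-in for the device runtime:

```python
# Map each fixed keyword phrase to a device action. Offline mode accepts
# only these phrases; free-form dictation needs the cloud recognizer.
COMMANDS = {
    "open document":   lambda ctx: ctx.viewer.open(),
    "zoom in":         lambda ctx: ctx.viewer.zoom(+1),
    "zoom out":        lambda ctx: ctx.viewer.zoom(-1),
    "take photo":      lambda ctx: ctx.camera.capture(),
    "freeze document": lambda ctx: ctx.viewer.pin_in_air(),  # then pan by head motion
}

def dispatch(utterance, ctx):
    """Exact-match a recognized utterance against the fixed vocabulary.

    Rejecting everything that is not a known phrase is also part of what
    keeps bystanders' chatter from triggering the device.
    """
    action = COMMANDS.get(utterance.strip().lower())
    if action is None:
        return False  # not a command: ignore
    action(ctx)
    return True
```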

The overlay happens only on the small screen, and you get head tracking via gyro and compass. GPS gives your position, and the camera can do additional vision-based tracking. But there is currently no “immersive AR”, as Andy calls it: no accurately registered overlay of information is present today – but there could be in the future, if the market needs it.

He could not show all the features, since the system was not hooked up to the cloud and company data, but we then talked more about the fields of usage.

Use Cases and Advantages

So, as said, they target industries like the oil, gas and mining market, where staff would use the systems on oil rigs, oil platforms or in dangerous spaces. Workers get instructions, measurement data or blueprints presented to them. A remote helper could connect via telepresence and communicate with the user to support the current task (also adding drawings or markers into the field of view from a distance to point to the right spot). Training scenarios remain important, too.

For training, Andy mentioned several studies showing the advantage of AR-supported instruction. For example, Boeing ran a study in which 50 students had to build an aircraft wing out of dozens of parts in roughly 30 minutes. They were untrained and had never done the task before. Three groups tried three different approaches: 1) desktop instructions, 2) hand-held tablet instructions and 3) hand-held tablet instructions with an AR view right at the object of interest. The results showed a clear improvement in speed and error-free work with AR: the task was finished 47% faster and caused only 1.5 errors on average instead of 8. Other studies even showed AR training results comparable to “old school” training with a personal human tutor (and totally crushing paper instructions). These promising results still used a hand-held tablet – the numbers would, according to Andy, go up even more with a hands-free system.

We talked about other use cases, too: homeland security, or police officers whose tasks could be supported with facial recognition (checking for registered bad guys) or license plate checks on the go. In general, connecting the system to the cloud and big data in the background could dramatically change our digitally enhanced working life. But what is crucial? The interface. Andy stated:

“The 21st century user interface does not have menus, file structures and all that stuff. It knows what you are looking at, knows where you are, knows what you are about to do.”

Systems like Amazon's Alexa or Siri – any intelligent device that has enough information, ideally including spatial awareness – will predict your actions and help you out just in time. In your industrial working day, the system will also know your current assignment and role and serve the best-matching information accordingly. Systems like SAP, ThingWorx, etc. will be able to connect through the SDK to make this vision work.

With wearables that react to your point in space and time and your current activity, an information revolution will happen, Andy assures. It will take (much) more time, but it will happen and be a big game changer – comparable to the dawn of electricity (which brought power tools to the masses, started the industrial revolution and democratised the technology).

Head Mounted Tablet – The Specs & Software

The details on the specs can be found on their page. The system runs regular Android – using tablet technology in the end, hence the name. A lot of software has been developed in the past for (rugged) outdoor tablets, and it can easily be shifted over to the wearable. The device comes with a battery life of 6-12 hours and supports hot-swapping batteries at run-time without losing uptime. The camera has a 16-megapixel chip with active image stabilization. It comes with all the typical sensors, in a rugged design.

Getting it on the road – My conclusion

Well, the final design is not available today, so I can't really give a verdict on the upcoming HMT. But if you are interested: they will start a “pioneer program” to let you take part in the beta and get the first wave of the device. Final shipping is planned for summer 2017 at $950.

For now I can only say that the current design already feels lightweight enough for a full day, and the complete speech control makes perfect sense in the given environment and worked well in the demo. Connectivity could not be tested, but I can imagine that with a regular 4G or 5G uplink you will be able to move your work data up and down. It would have been nice to see more real-life demo scenarios to judge the workflow and usability.

AR in this device is not the funky AR we love to see – and expect from Magic Leap or a consumer HoloLens. It just gives you a video screen plus information overlaid on the camera feed displayed on your mono screen. But it feels like a realistic, down-to-earth 2016 use case for the technology in that field. It gets the job done, and already improves a lot on traditional systems (paper manuals, phone calls, flying in the technician instead of his/her telepresence). We can imagine how things will get even more exciting once we have perfect AR overlays in it. A glimpse of more AR in industrial scenarios as described above can now also be seen in a new HoloLens video from Thyssenkrupp. Although the Microsoft design is obviously not rugged at all, it gives you an idea.


Banner photo (C) RealWear, Portrait photo (C) Tobias Kammann

Interview with Fyusion's CEO Radu Rusu on AR content generation

Last week Fyusion went live with a video showing off their new app that lets you generate 3D models of any real-world object quickly and robustly. The video showed a man walking around a woman while holding up his phone towards her. Later on, we see him wearing a HoloLens and presenting a holographic version of the lady floating in mid-air.

Fyusion claims to have a “patented 3D spatio-temporal platform [that] uses advanced sensor fusion, machine learning and computer vision algorithms” to achieve their results. But does it work out of the box? How can we use it? Time to check for some answers.

Today I had the chance to briefly chat with Radu B. Rusu, the CEO of Fyusion, Inc. Though there was not much time, I could ask a few questions about Fyusion and the future of AR content generation. So, let's take a look at the video and jump right in afterwards:

augmented.org: Hi Radu. Thanks for taking the time. We've all seen the cool video and are wondering who is behind it. Could you briefly describe who your team is? How long have you been working on this?

Radu Rusu: We are a team of PhDs and engineers with backgrounds in robotics, machine learning, and 3D computer vision, who have been working on a novel, advanced spatial photography visual format since 2013.

Great. So, how did you come up with the idea for your product? What drove you?

We believe that in this new digital age, with all the new sets of challenges and opportunities we are facing, we cannot rely on legacy visual formats created 100 years ago (such as 2D photography and video), but rather require a new way of thinking about the world in terms of 3D, in terms of space.

Our products include both a comprehensive platform that is currently being deployed on more than half a billion devices with many commercial partners, and our own suite of applications (including Fyuse) built on top of this platform.

Speaking of devices: the app works on iOS and Android today. But do you also already offer a ready-made HoloLens app to record and show content, for example? If it's not available today: when will the app be available for those devices?

We are waiting for the HoloLens device to be available to the public first, before committing to releasing a consumer product. In the meantime, any Fyuse representation created using our consumer application is AR-ready and will be able to be shown on HoloLens in the future.

Great. So, let's talk a bit about the technology. Do you create full 3D models of the scanned objects? Or is it currently just a number of 2D images that change when you move around or tilt your device? From the video we cannot really tell.

We generate a 3D representation of the captured subject and display that to the viewer.

So, when converting it to 3D, you key out the background, as shown in the HoloLens demo?

Yes, our processing pipelines involve the creation and modeling of holograms from our own 3D spatial photography format (i.e., fyuse) with high accuracy, thus removing background elements from the subject hologram.

How is the created content stored? Can the data be reused and shared in other applications?

Yes, as long as these applications use our SDK.

OK. So let's take a look at the SDK asap, please! Will your software and SDK be available for other devices like Tango, Vive, Oculus or, for example, the Meta?

We are currently focusing on AR devices only, but are investigating the potential marketplace for VR as well.

Cool. So, can you give us your thoughts on what needs to happen for AR to be successful for the broad public?

We believe that for consumer AR to be successful, anyone should be able to create content for it. Currently, there is a small number of content creators and an even smaller number of hardware players. Each is seemingly dependent upon the other to make the first move. With Fyusion’s technology, all content that is created on the platform today (including with the Fyuse app) is currently ready for AR and we’ll be announcing major partnerships with top tier AR hardware producers soon.

So, what other plans do you have for the near future?

There are going to be several announcements in the next few months regarding other technological advancements from our company that we unfortunately cannot disclose at the moment. All we can say is that we’re just getting warmed up ;-)

Sounds great! Can't wait for the news. I'll be happy to see more from you guys and to be able to generate 3D AR content with it. So, all the best and thanks a lot for your time, Radu!


So, it seems we can expect more coming up from Fyusion. Let's wait and see! It is definitely a great next step to have 3D memories that can be created by the masses. But how long will it take to get a proper 3D scan of your wedding guests? Guess it's back to square one of photography… where exposure times were minutes long and people had to stand still again. But, what the heck! Count me in! :-)