The Future of Everything in MR @ Digility

Day 2 and the future

Hey guys, time to continue with my recap! Day two focused more on start-ups, investments in VR/AR and on the future vision – where is this all heading?

The start-up track had many interesting presentations and discussions, but I won't go into all the details here. Especially since most of it tackled only VR – although many squeezed an "and AR" into their PowerPoint titles. The overall message was that you must start with VR today to learn, that it will have great impact and that it can improve all areas (efficiency at work, entertainment at home, deeper connection in journalistic documentary work, etc.). To summarize this business view on the world, let me quote Wolfgang Stelzle from Re'flekt:

AR and VR is still in its infancy. The majority of companies' [activities in AR/VR] are pre-product. So, it does not matter how you start – but please start now!

Everybody agreed on this during the day, and the feeling was (again) that augmented reality will be bigger in corporate environments: managers don't make fools of themselves (you still act within your real world) and existing tasks and product views can be augmented. The right integration of AR is crucial, and big companies are starting to do so (e.g. ThyssenKrupp just ordered 24,000 HoloLenses, as rumoured by Stelzle).

In the year 2525…

But where could this lead? What will happen? The main track of day 2 looked deeper into these questions. It started off in the morning with IT guru Robert "The Scobleizer" Scoble and his vision.

To walk us through it, he basically rounded up all the cool demos (by others) floating around the web right now: the ILMxLAB Star Wars example on Magic Leap, the Facebook social VR experience with its virtual selfie stick, the Dokodemo HoloLens video, or Project Arena for sports in VR.

[Image: Robert Scoble]

Robert says that we are in the fourth transformation of computing and that AR and AI will change everything: first came PCs, then mobile and touch interfaces, and now VR and AR. We should have VR in our homes today to understand what is coming. In 2-4 years AR should become very important, and within the next six years it will be totally mainstream – "the tech is coming".
For him, Mixed Reality is the combination of six technologies that need to be tackled (optics, fast uplinks, eye and room tracking, spatial audio and categorization). Categorization? Recognizing and identifying the objects around you. What are these objects? What do they do? 3D image recognition plus artificial intelligence will do the major trick for big AR scenarios. Developing bigger solutions will only scale with AI in the backend.

But what about current devices? Bulky headsets will be a thing of the past very soon, Robert claims. With the new Snapchat goggles in the background he states:

Tech in glasses will be as small as a sugar cube within 2 years! The glasses will get cheap and everybody will wear them all the time. 5 years from now your entire life and world is gonna be flipped! It's gonna be an incredible time.

Scared? Let's not be. I can't wait to design this – happy to be part of this new era, I'd say!

Losing my VRginity

Prof. Dr. Frank Steinicke from the University of Hamburg continued and went back through the history of VR. The first VR wave in the 90s failed. The tech was just not ready: he showed us some old HMDs and locomotion devices – I remember trying that kind of thing during CeBIT '96, it was really crappy! But a second reason could have been that computers became a boys' toy in the 80s (we had the first wave of nerd series on TV back then – think of Riptide!). This social change and the (less socially centered) perception of tech might also have hindered VR from flourishing, according to Steinicke.

But now, with all the smartphone tech and computing power, we just might make it this time! In 15 years computing power will be 1,000 times faster than today (if Moore's law continues: a doubling every 18 months gives roughly 2^10 ≈ 1,000x over 15 years) and the uncanny valley should be done for (see the evolution of Lara Croft).

If the immersion and representation get that perfect, we might be able to create empathy through AR and VR – and have the biggest impact on society then! The same idea was echoed during the panel afterwards.

A world like Snow Crash… without the weird stuff

We should use the tech as an empathy machine. To connect with others and to create emotions. We should use it for the better. Often-quoted classics like the book Snow Crash or the younger Ready Player One could become reality sooner than we think. We just need to avoid the weird stuff! People will be spending too much time in VR and AR. It will be a problem. Robert adds:

VR is ten times more effective than morphine. It's a new drug!

Being so powerful, VR will play a key role in teaching, and we must therefore watch out to get it right for the next generation. People will be crying in their headsets. But productivity will also go up once everything works untethered and with a higher resolution. No more screens will clutter our desks – virtual screens will clean up the place. Tools like Tilt Brush are a great example of the new things to come: you just cannot do that without AR and VR. Here the new tech creates something entirely new. This is the direction we need to be going!

The Future of Everything

So, let's use the tech to be more productive, more entertained, but even more to create empathy and to connect with others. How much of our time will we spend in VR and AR?

Ela Darling talked about her virtual reality porn and adult business here. VR can enable your fantasies – you could live dreams that are not possible in real life. You can be yourself and be genuine. You have the chance to really connect with someone. Both can walk away satisfied – be it physically or in a therapeutic way. (Ela says the business is 20% sex and 80% therapy.)

The closing panel continued to dream about the best use of the technology – without talking about the technology itself. It was moderated by the wonderful Monika Bielskyte (who gave a great, calm metaverse poem beforehand). Howard Goldkrand (SapientNitro), Marshall Millett (Aemass) and Alysha Naples joined the great philosophical campfire chat. Let me quote only very briefly, unfair as that is.

[Image: panel discussion]

We must use the great opportunity we have today to let people understand and connect. Let's not wait for the technology to be perfect before we start thinking about these social questions. Let's rather take small steps today and trigger small changes. Let's go beyond borders and start something new. Completely new. Marshall:

If something like a zombie shooter game is the killer app for AR and VR, we have a problem!

Howard adds that we should disrupt the current state and that we should "be an amazing glitch in the system". We can step out of our comfort zone into new worlds. Alysha:

We can create a landscape that gives you the feeling of being 100% out of your known system. If you can do this – why create a shooting game? Let's create wonder with this!

Let's create mixed reality experiences that let plurality exist, Monika adds. Let's evoke tolerance and curiosity. Let's use VR to help us make better decisions in life!

[Image: the closing panel on the future]

The nice wrap-up and look into the future went on like this for quite a while and was a perfect final moment for the conference. How will the metaverse grow? Will it flourish freely or proliferate out of control? How can we put the tech to its best use? Could we build a "what-if" machine with AI to visualize alternate versions of reality? How can it help humanity?

Conclusion

Well, it's not so easy to give a simple tweet-length conclusion here. The conference surely touched both sides: the technology/business and the social implications it all might have. Chapeau for this, Digility! On the tech side it would have been interesting to see more behind-closed-doors demos in AR (come on, every big company has a HoloLens right now!) to catch up in one go. That will probably be due next year at Digility. For now, a bit more AR in practical use cases would have been nice. But the abstract discussions nicely covered both – AR and VR.

Well, today it felt good to quote more from the social impact side of all this. We might be at the dawn of a new computing age that will change the world like the industrial revolution or, later, the internet!

So, let's take our time, slow down and create something great. After all, it lies in our own human engineering and artistic hands to do so! :-)


Think beyond Screens – Creating Worlds in AR/VR @ Digility

Day 1, Part 2

Hi guys, as promised I want to continue my report on the Digility event in Cologne last week. It's not my job to summarize everything (the hosts would sue me for giving away all the info for free), but I'd love to keep giving an overview of what happened on day 1 during the development track. It got a bit more technical at first – talking about e.g. performance by Nvidia, hardware (GoPro), indoor navigation with Project Tango ("Google Tango will be a gamechanger for indoor navigation, Google wants to scan the whole world … and they want (our) data") or even how to orchestrate 360° sound for VR (Berlin Phil Media). But then it jumped more towards content, use cases, storytelling and the meta level of all things.

Boot Camp for Reality

As a bridge to that, I want to briefly mention the talk by Clemens Conrad from Vectorform. He started by pointing to a classic quote attributed to Benjamin Franklin; if you haven't heard it, it's a good entry point into VR and the question of why VR can help us out. Why should I learn within VR?

“Tell me and I forget, teach me and I may remember, involve me and I learn.”

They build virtual training software and an ecosystem to let people learn better and faster. They want to increase efficiency and satisfaction with fewer errors or casualties (e.g. for high-consequence, very dangerous jobs). Here it makes total sense to spend some time in a (safe) virtual place. Another company, SoftVR, showed a multi-user training scenario in VR where you had to work in a nuclear power plant. Training the right steps can be very critical – it's better to drill them deeply in a boot camp first, for sure.

User tracking and analytics were shown as a great tool to optimize training results. Did the user bend over too much? Can he or she reach the handle above the head? VR is a perfect match here since you must move naturally through space. Training can be customized personally – if a user performs badly the system could give more hints, and fewer hints for the pros.
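
To make this concrete, here is a minimal sketch of such a pose-analytics check (my own illustration, not Vectorform's actual system; the sample format, thresholds and function names are all assumptions):

```python
# Hypothetical sketch: flag ergonomic problems in recorded VR training poses.
from dataclasses import dataclass

@dataclass
class PoseSample:
    t: float        # timestamp in seconds
    head_y: float   # tracked head height in metres
    hand_y: float   # tracked dominant-hand height in metres

def analyze(samples, standing_head_y, handle_y, bend_ratio=0.7):
    """Return timestamps of excessive bending and whether the overhead handle was reached."""
    bent_too_low = [s.t for s in samples if s.head_y < bend_ratio * standing_head_y]
    reached_handle = any(s.hand_y >= handle_y for s in samples)
    return bent_too_low, reached_handle

samples = [PoseSample(0.0, 1.75, 1.10), PoseSample(1.0, 1.10, 1.30), PoseSample(2.0, 1.72, 2.05)]
bends, reached = analyze(samples, standing_head_y=1.75, handle_y=2.0)
print(f"Bent over too far at t={bends}s; reached overhead handle: {reached}")
```

A system along these lines could then tune the number of hints per user, exactly as described above.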

Close The Gap

John Peeters from Holition showed more specific AR examples out on the street and talked about their experiences so far. He sees digital tech as a good tool to connect people, but at the same time people often don't want to use tech in public. People are afraid of looking stupid.

[Image: augmented retail mirror]

Small working examples are digital try-ons with an augmented mirror (jewelry, watches, …), but there are also big fails in this field. If the digital prop jumps around or looks crappy, you would have been better off not using AR at all. The digital add-on needs to be relevant for the store and come with great quality. Otherwise it's better to stick with this advice:

Don't always force yourself to use the latest technology. Try old technology with a digital twist! Never start from the technology. Choose the right tech for the right job!

For example, they did not augment a full jacket onto the person in front of the mirror, but rather used one physical jacket whose color could be adjusted digitally (making it easy to try on different versions with a single gesture).

He proclaims that AR will be big in the beauty sector. Digital make-up, make-up transfer previews from friends' or celebrity photos onto your own face – or (awesome idea, I'd say) digitally augmented make-up tutorials (hey, YouTube star Dagi Bee, go AR make-up now)! Go beyond the glass frame of your analogue mirror!

“Remove the glass ceiling”

So, time to go meta. Enough technology! What can and should we really do with all this tech? How should we do it? Ola Björling from MediaMonks gave us an idea of how to approach VR by thinking about human perception – and how to hack it. Typically, a story in, for example, a book only comes to life in your brain. You visualize it with your imagination. It's not that direct. Even TV shows or movies need to bridge the gap between the rectangle (the frame of the TV set) and us on the couch:

VR and AR now hold the potential to remove the translation and interpretation of the presented information. It becomes a direct sensory experience! VR will not be at its peak when we figure out how to film in 360° – we must look beyond. We must start all over again to reach its full potential!

Ola states that we must raise the quality bar dramatically for mixed reality experiences to keep people convinced: "The uncanny valley is extremely deep in VR". To get there, everything matters. If haptics are involved they must work flawlessly (touching a real steering wheel of a car in VR must match to the millimetre). If music and sound are there (they should be), they can become the deal breaker or the one feature that sells it. Music can be the "hotline to our emotions" and can create moods easily. Optimized spatial sound will help out in a subtle way and must play a key role! In everything, we must "aim higher to reach deeper".

The 4th wall is down

Astrid Kahmke from the Bavarian Film Center shared this approach and talked about the design of the content and the stories: presence is key for VR magic!

If the quality is high enough (visually, in sensory inputs – but also in content) we can get immersed and convinced. The fourth wall is down and there is no distance between the narration and the consumer (= actor) anymore. You are active in the story.
The classical Campbellian hero's journey with its arc of suspense becomes the user's journey. New spatial narratives will push storytelling forward once again. Storytelling will and must change. People can interact even further (than in "normal" games, for instance), and designers need to build their worlds and stories in a way that lets us connect with the protagonist (= ourselves) to believe the story and get emotionally attached. Also, people tend to enjoy their freedom in VR – let them run free. Don't manipulate them. Leave space to relax and enjoy. Storytelling needs to relearn its toolset!

Though Astrid loves VR and AR storytelling, she reminds us not to fall for it blindly. Does my story make sense in VR? Does it really fit? It's the moment of truth. We should use the right medium for the right story. Rather use a screen or printed pages for your story if the result will be better. Your story counts, not the technology!

I'm afraid of Augmenticans

Speaking about what counts: Alysha Naples entered the stage again and reminded us once more what matters most. (Again, some moments just don't work in a written summary or even in a virtual reality. But let's try.) Will we go beyond screens? Probably yes. What happens along the way? Will we get it right? Will we adapt?

Alysha repeats once more that humans don't change. The technology around us does. But humans still live a physical life, have families and relationships; they are eager to learn, to share, to play and to explore the world.

Mankind is (hopefully) here to stay. Technology changes over the years and should work for our benefit and meet our needs. Looking at today's approaches, we always see a screen in front of us. Be it a rectangle or a stereoscopic panel, video see-through or an optical see-through overlay – we still observe and experience the world through a mediator. Today the mediator can be easily identified – a TV, phone or computer screen – and can be easily turned off (see the illustrations picked by me).

[Image: smashing a TV]

But once reality gets filtered through glasses or contact lenses, it might become hard to escape. We have big potential to get it wrong! I would also add that it becomes very dangerous: we are likely to turn into zombies that are even more dependent on the big corporate input from the glasses/eyePhone manufacturer. Closed-system mixed realities will steer us into their ecosystem to consume – if no open metaverse evolves. (Well, let's keep that for another day.)

[Image: Futurama, "Attack of the Killer App"]

Alysha warns us not to think only about the wearer or user of AR glasses. We must not just think hard about the cool and crazy content for him or her – we also need to think about the non-wearers, the have-nots. What about the other people in the room? How do AR and spatial computing affect society? Will we turn into glassholes again? Will we run into a digital divide, with one half being afraid of the augmented people? Will they have a good reason to be?

We don't want a hyper-reality as Keiichi Matsuda has depicted multiple times in his videos. We are smarter than that! We will be clever enough to get it right! We are awesome! Oh, really? The image below is a good reason to rethink our arrogance:

[Image: a giant traffic jam]

The inventors of the automobile surely didn't plan for this to happen. They wanted to create safe, cheap and fast transportation and democratic access to it. Not so sure anymore that we will get it right? So let's think about AR and VR for a bit:

What we create today will affect the future long-term! We must slow down and think about our decisions!

So, how do we get it right? We need to ask the right questions. Simple example: don't ask "How can I type in VR in mid-air?", but rather ask "Does typing in the air make sense?" Alysha wanted to throw the HoloLens out of the window when she was forced to enter a WiFi password in mid-air (I can totally relate). We need to focus on the essence of things. When I want to text in AR/VR – what do I really want? I want to communicate asynchronously. I don't necessarily want to enter text in mid-air. We must abstract away from known approaches. We must go meta. Let's make room for the new and use the tremendous opportunity that lies before us today. Let's not build a Twitter client for our eyeballs, please!

Important aspects need to be addressed by us today. Will people fall into the virtual world? Will they fall into the screen (well, without the screen borders)? The more you belong to the screen-space communities, the less you belong to your physical, real-life community. So, where do we want to belong? Probably both worlds. So, Alysha:

To belong, you must be who you truly are!

Let's keep this in mind; let's not go wild about the technology. Let's take a step back and use it to our advantage: to realize our fantasies, to reach our true goals, to create great tools, to connect.

Today we are at a crossroads. Let's not make the wrong choice. Let's not rush into the prettier-looking green forest. Maybe we need to take the detour – the hard way – to get things right for mankind.

… To be continued in the next days …

Diving into Digital Realities @ Digility AR/VR Conference 2016 in Cologne

Recap Day 1, Part 1

Wow! Two exciting days at the Digility conference in Cologne are over. Two days fully crammed with interesting talks, demos, panel discussions, technology, visions for the future, awesome ideas and fantasies, and fantastic people! Now I'll make my humble attempt to transcribe a glimpse of it and carry a bit of the speakers' ideas over to you!

The conference was basically split into six pieces: workshops on both days, the main track "brand experience & best practices" on day 1, "developing for VR & AR" on day 1, the future vision for mixed reality on day 2, the second track about start-ups and investments on day 2, plus demos from the partners. Today, let me start with the main track from day 1 and dive into the introduction and the best-practice reports from the industry. I won't cover all the talks, but rather trace the common thread of ideas.

A new hope, a new conference

Katharina Hamma from Koelnmesse introduced the show at the beginning. For those hearing about Digility for the first time: this is a new conference, held for the first time. It was co-located with photokina this year in Cologne's huge fair center and (nobody paid me to say so) professionally organized and executed, with about 1,000 guests. They (obviously) focus on AR and VR and the industry emerging around it – to learn about the technology and the stories, to connect with other spirits in this field, to push development, and to discuss the social impact of (my) favourite technologies. That's why I became an official media partner, to get you guys a report, too. (See my live tweets for more.)

The Braveheart speech to kick things off

[Image: Alysha Naples]

Alysha Naples stood up (though still too early in the morning) and set the scene for the two days. Let's go wild and talk about the future – no, wait! Let's do a reality check first! Human beings stay the same, we have the same needs. Although the mode or physical form may vary between decades, people pack the same stuff they used to pack ages ago (only changing the mode, from physical paper book to ebook, for instance). People are just physical. When we move half our stuff into the digital space (onto our phones in 2016), it leaves the physical space – but still exists. Now the problem comes along: when we work on our phone or computer we leave the physical space (well, ok, our brain does) and focus on the screen space (and the virtual reality inside it). This mode switch between the two spaces is key and a major problem!

This switch – or trying to avoid it – holds huge challenges, but also chances, for mankind … and cannot be overestimated. How can this switch happen with glasses? What if I keep seeing the real world alongside? What if it's all VR? How do we need to design interaction? We must not carry over old metaphors to a new medium (Did you ever catch yourself trying to pinch-zoom or tap-select a line in a physical book, thinking of your ereader? Bingo!). We instead need to create something completely new for the new devices! We must remove the frame that separates us from the data on the screen! We don't need the screens anymore! Let's go in between! It's time for something completely new! Break the screen that separates both worlds. (Time to look at A-ha below and start singing…)

[Image: A-ha, "Take On Me"]

Grounded Technology

Gotcha. That's the plan, Alysha! Sounds awesome! So, what do we have out on the streets today? Audi's Marcus Kühne kicked off the best-practice examples, giving insights into the past and future of the sales process at Audi. Again, people stay the same! The sales process has basically not changed over the last 60 years (going to a dealership with your partner, taking a look around, seeing a few real cars, getting talked into one, taking a test drive, …). But the new situation in the 21st century is: the car (or any product) has become far too complex to grasp everything. Too many configurations make it impossible to present everything in a limited space; too much technology screams to be explained. This is the moment where VR (and AR, too) can stand up and shine! Help us, awesome mixed reality continuum!
Easier said than done… and here we dive into the technical issues. One example: the construction data (CAD data) of the cars has more than 50 million polygons, but it must be shrunk down to a maximum of 5-7 million polys, according to Marcus (game engines typically stay even below that, at 1-2 million). Their attempts started at 50 fps with 45 ms latency (quite nice, but just not enough for VR). In 2016 they reached 90+ fps and 20 ms latency (quite alright for VR) with their partner ZeroLight (I mentioned it before).
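
A quick back-of-the-envelope check shows why those numbers matter (my own sanity check, not Audi's official budget; the ~20 ms comfort threshold is a common rule of thumb for motion-to-photon latency in VR):

```python
# Compare frame budget and motion-to-photon latency for both setups mentioned above.
for fps, latency_ms in [(50, 45), (90, 20)]:
    frame_budget_ms = 1000 / fps  # time available to render a single frame
    ok = latency_ms <= 20         # rough VR comfort threshold
    print(f"{fps} fps -> {frame_budget_ms:.1f} ms per frame, "
          f"{latency_ms} ms motion-to-photon: {'OK for VR' if ok else 'too slow'}")
```

At 50 fps you already spend 20 ms just rendering one frame, so a 45 ms motion-to-photon pipeline has no chance; at 90 fps with an 11 ms frame budget the 20 ms target becomes reachable.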

[Image: Audi VR moon experience]

But besides the tech there is more to it! You will run into conceptual challenges. If you want to sell your car with VR, you can and must engage the user more. So, Marcus:

People get excited when you let them do impossible things!

You can trigger more emotional engagement by doing the impossible (like flying the user to the moon) and the unexpected. Hence you should do it! Also, people love to interact on their own. Let them run free and build your demo or experience that way! Make it easy and don't put the user on rails. Don't give the standard sales tour! Once again, exploration and engagement are key to success. (By the way: check out the (German) podcast VRODO TALK we recorded with Marcus a few weeks ago, if you are really into it!)

More lessons learned

Dirk Christoph from Innoactive continued and hopped onto the stage to show us their take on VR with the Media Saturn kitchen configurator that can be found in German stores. They stumbled upon many challenges as well and talked about their issues building a VR point-of-sale (PoS) system as a software company (coding nerds don't like hardware problems, I'd say). Convincing content – you want to sell the kitchen, after all – had to be paired with a flawless user interface for first-time users. So, Dirk shared their lessons learned with us.

[Image: Innoactive VR kitchen configurator]

A user survey revealed that 88% of people were (in general) comfortable wearing a head-mounted display and that 73.5% could imagine buying products directly in VR, while fewer than one in ten users experienced motion sickness. 88% would even love to design their own virtual environments (though we don't know whether the survey was representative or whether mainly nerds or artists were asked). Overall, the presented realism, latency and interface worked well for most, but at the same time people complained about the poor resolution (27%), the weight and (un)fit of the goggles, the cable and the sweat factor.

Again, engaging people is key, Dirk confirmed, following up on Marcus's talk. You need to make the experience fun and easy (don't overdo it with complexity). Possibly even integrate small mini-games to appeal to the users' inner child, e.g. let them chop a cucumber on that kitchen table – after all, it's a kitchen you are trying to sell. One could say this has no direct sales component; I believe it actually does: is the table high enough, do I feel comfortable cutting the cucumber here next to the air vent of the oven, etc. People just like to be people and act normally – in a virtual world, too.

Speaking of acting normal…

How do you act normally in a windowless capsule while being shot at 700 miles per hour through a tube, like a message in a bottle? You get nervous? I bet I would. You are claustrophobic? You go crazy? Probably. Let's avoid that, please! Guys, come up here!

[Image: Hyperloop]

So, Dirk Schart and Wolfgang Stelzle from Re'flekt explained their approach to augmenting the windows of Elon Musk's Hyperloop transportation system. Windows? Didn't I just say there are none? Right. The windows inside the capsules are just window frames with 4K OLED screens behind them, mimicking windows. People just like to be people and to live in a real world where they know what's going on. Again, Alysha's proclaimed switching of spaces is a problem. Is the real world right and in focus, or is it the screen window? Does it fit together in our brain? The user needs to have at least the feeling of knowing what's going on. The same applies to the Hyperloop: a feeling of movement, speed and the environment will help us stay sane (no windows and no beer make Homer go crazy, I'd say).

[Image: the augmented window]

The technology needed for this (in this case) is a regular screen plus a RealSense depth camera that tracks your head's position and viewing direction. With the calculated position you can set the virtual camera for the environment rendered on the screen (= window) in real time. Currently the user closest to the window gets the correct perspective – but future updates might bring multi-viewer experiences for a single window screen.
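
For the technically curious, the underlying idea is head-coupled (off-axis) perspective: the tracked head position shifts the view frustum so the screen behaves like a real window. A minimal sketch (my own illustration, not Re'flekt's code; coordinate conventions and numbers are assumptions):

```python
import numpy as np

def window_frustum(head_pos, window_center, window_size, near=0.1):
    """Off-axis frustum for a viewer at head_pos looking through a physical
    'window' (the screen) of width x height metres; all axis-aligned."""
    w, h = window_size
    rel = np.asarray(head_pos, float) - np.asarray(window_center, float)
    d = rel[2]  # distance from head to the window plane
    # Project the window edges back onto the near plane (standard off-axis projection).
    left   = (-w / 2 - rel[0]) * near / d
    right  = ( w / 2 - rel[0]) * near / d
    bottom = (-h / 2 - rel[1]) * near / d
    top    = ( h / 2 - rel[1]) * near / d
    return left, right, bottom, top, near

# Viewer 60 cm in front of the window and 20 cm left of its center:
print(window_frustum([-0.2, 0.0, 0.6], [0.0, 0.0, 0.0], (1.2, 0.7)))
```

Each frame, the depth camera's head estimate would feed a function like this, and the result drives the render camera – so the on-screen landscape shifts correctly as the viewer moves.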

Problem of going crazy solved! But travelling would still suck the same way it does today. We can do more to make it interesting and entertaining, Dirk continued. Swap the environment from the real world, e.g. a California landscape, to something fantastic (I'd say use the Snowpiercer landscape! (Worst movie ever! – though I love Tilda)). Obviously you could also show additional travel information, a movie, a game or personalized (I heard it coming) ads (d'oh).

But honestly, this is not really AR to me. (Though Re'flekt surely knows what true AR is!) Here it's just a well-solved real-time head-tracking problem used to present an interactive shop-window screen. But hey, it's a great first step and I'm sure more will come…

…maybe we will soon see this technology on transparent windows in German trains! (Re'flekt is evaluating ideas with Deutsche Bahn and their Innovation Train.) So, we can see: more players are jumping onto the VR bandwagon. A good time to jump right into the afternoon panel, where the conversation about lessons learned continued.

Don't wait too long. Start today!

On the panel we had Audi, Ola Björling from MediaMonks and Sven von Aschwege (Deutsche Telekom), with Alissia Iljaitsch moderating. The tenor was the same with all of them: today VR might be an early-adopter advantage, but soon enough it will be an established (and expected) technology. You will fall behind if you don't take part now. Better to start today with your first prototypes and learn over the next one or two years. But if you don't: bye-bye! In five years you must have VR, Marcus said. Ola added:

VR is the biggest blank canvas ever! Go creative!

Creativity gets even bigger in VR. You can do more with it. But you must also do more than just use the technology. Reach people emotionally with great content (once the hype is gone, the tech and funky effects will get in the way rather than help). Once again the buzzword bingo wins while being true: content is king! (Keep this in mind for my upcoming recaps about Digility.)

Well, ok, let's stop here for today with this classic (but true) phrase! As you might have noticed, the industry talked mostly about VR on the first day. You could also see that represented on the demo floor. Countless Vives, Gears and Rifts were shown, but only a single HoloLens found its way to the conference (afaik). But the queue confirmed huge interest, and there will be more coming up on AR. AR will get even bigger than VR. It will impact business and our society even more dramatically… We'll get there soon enough (I mean in one of my next posts, but hopefully / I'm afraid possibly in real life, too)!

To be continued (very soon) …

Rugged AR for the Industry

Let's talk about head-mounted tablets, alright? What? Tablets? Yep, you read that right. What are they, and where do you use them in heavy, rugged outdoor industry scenarios? Let's find out.

I had an interview with Andy Lowery from RealWear – actually the first interview since they went public with their upcoming device and the press information! Today, I'd love to share the long talk we had about AR, how the information revolution will affect our lives, their specific hardware and plans of course, and the advantages of head-mounted systems. I also had the chance to try out the latest prototype device. But since the interview was just too long to quote 1:1, let me sum it up for your convenience.

Andy Lowery is surely no noob to the AR scene and the industry. If you work in this realm you will know that he used to be the president of DAQRI, who also produce smart helmets, though with a different focus, according to Andy. Before that, he was Chief Engineer at Raytheon (on electronic warfare), and way back he came from the US Navy, where he was a nuclear surface warfare officer.

During his time at DAQRI he was pushing industrial AR, also working with partners like metaio. The idea came up to fix AR technology to any worker's hard hat in the field where needed. Since people wear the helmets anyway, it would be a winning combination and a logical step to add technology that can easily be put on and taken off. Since DAQRI was going for another roadmap, he founded RealWear to push in this direction. Chris Parkinson from Kopin (who produced the Golden-i wearable system) joined forces with him, and they are getting closer to their release. Time to take a look at the device!

What can it do for whom?

Andy described the history of the development and the special requirements their clients' environments demand. Other smart-glass competitors (like Vuzix) might not comply with these: e.g. in the oil, gas and mining business (out in the field) you need a ruggedized, dustproof, waterproof – and sometimes even fire- or explosion-withstanding – design. RealWear describes the system as follows:

“Featuring an intuitive, 100 percent hands-free interface, our forthcoming RealWear HMT-1 brings remote video collaboration, technical documentation, industrial IoT data visualization, assembly and maintenance instructions and streamlined inspections — right to the eyes and ears of workers in harsh and loud field and manufacturing environments.”

Thinking about the design, they asked "what would I do with an industrial device? what would it look like?", and two major aspects are key: it needs to be hands-free (people are working while using the system) and non-intrusive (for safety reasons). Andy said people in the field just reject gadgets where you need to use your hands on glossy touch screens ("This is ridiculous!"). So the complete system is speech-controlled and can be pulled out of your field of view with one move.

[Image: the RealWear prototype]

Let's do the Live Demo

So, I could try on the latest design. The first impression was that you don't really notice the weight. It's as comfortable as it can get with a hard hat on. The video screen arm can be easily adjusted so that you have it either directly in front of your eye (left or right) or in a peripheral position, where you only look at the screen when glancing down (keeping a clear sight to the front). Image quality and brightness looked good, too, though I first had to find a sweet-spot distance and angle to stay comfortable for the longer session.

Then I browsed the menu and could trigger all commands through speech: open a document, change zoom level, close a window, play a video, open a report, write a report, take a photo, etc. Recognition of these fixed keywords was stable and only got triggered by me (not by the others in the room saying the same phrases). The given tasks worked flawlessly, and some small but helpful features make the interaction easier. For example, you can either zoom and pan a document (e.g. a circuit-board layout) by speech, or alternatively activate a mode that virtually fixes the document in the air and changes the presented part as you move your head.

Speech recognition is more capable when connected to the cloud, where you can use natural language to dictate reports by voice; offline it is (currently) restricted to the fixed keywords.
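
Conceptually, the offline mode boils down to matching a recognized utterance against a fixed keyword grammar. Here is a toy sketch of that idea (my illustration, not RealWear's software; the commands are made up):

```python
# Hypothetical fixed-keyword dispatcher, as used when no cloud connection exists.
COMMANDS = {
    "open document": lambda: print("opening document..."),
    "zoom in":       lambda: print("zoom level +1"),
    "take photo":    lambda: print("capturing photo"),
}

def dispatch(utterance: str) -> bool:
    """Trigger a command only on an exact keyword match; ignore anything else."""
    action = COMMANDS.get(utterance.strip().lower())
    if action:
        action()
        return True
    return False  # unknown phrases are ignored rather than guessed at

dispatch("Take Photo")    # -> capturing photo
dispatch("write a poem")  # -> ignored (no natural language offline)
```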

The overlay happens only on the small screen, and you get head tracking via gyro and compass. GPS gives your position, and the camera can do additional vision-based tracking. But there is currently no "immersive AR", as Andy calls it: no accurate spatial overlay of information is present today – but there could be in the future if the market needs it.

He could not show all the features since the system was not hooked up to the cloud and company data, but we then talked more about the fields of usage.

Use Cases and Advantages

As said, they target industries like the oil, gas and mining market, where staff would use their systems on oil rigs, oil platforms or in dangerous spaces. People get instructions, measurement data or blueprints presented. A remote helper could connect to them via telepresence and communicate with the user to support the current task (also adding drawings or markers into the field of view from a distance to point to the right spot). Training scenarios remain important, too.

For training, Andy mentioned different studies that showed the advantage of AR-supported training. E.g. Boeing did a study where 50 students had to build an aircraft wing out of dozens of parts within roughly 30 minutes. They were untrained and had never done this task before. Three groups tried three different approaches: 1) desktop instructions, 2) hand-held tablet instructions and 3) hand-held tablet instructions with an AR view right at the object of interest. The results showed a clear improvement in speed and error-free work with AR: the task was finished 47% faster and caused only 1.5 errors instead of an average of 8. Other studies even showed AR training results comparable to "old school" training with a personal human tutor (and totally crushing paper instructions). These promising results still used a hand-held tablet – according to Andy, the numbers would go up even more with a hands-free system.

We talked about other use cases, too. These could be homeland security or police officers – supporting their tasks with facial recognition (checking for registered bad guys) or license-plate checks on the go. In general, connecting the system to the cloud and big data in the background could dramatically change our digitally enhanced working life. But what is crucial? The interface. Andy stated:

“The 21st century user interface does not have menus, file structures and all that stuff. It knows what you are looking at, knows where you are, knows what you are about to do.”

Systems like Amazon's Alexa or Siri – any intelligent device that has enough information, ideally including spatial awareness – will predict your actions and help you out just in time. In your industrial working day, the system will also know your current assignment and role and serve the best-matching information accordingly. Systems like SAP, ThingWorx, etc. will be able to connect through the SDK to make this vision work.

With wearables that react to your point in space and time and your current activity, an information revolution will happen, Andy assures. It will take (much) more time, but it will happen and be a big game changer – comparable to the dawn of electricity (bringing power tools to the masses, starting the industrial revolution and democratising the technology).

Head-Mounted Tablet – The Specs & Software

The details on the specs can be found on their page. The system runs regular Android – using tablet technology in the end, hence the name. A lot of software has been developed in the past for (rugged) outdoor tablets, and it can easily be shifted over to the wearable. The device comes with a battery life of 6-12 hours and supports hot-swapping batteries at run-time without losing uptime. The camera has a 16-megapixel sensor and stabilizes the image actively. It comes with all the typical sensors in a rugged design.

Getting it on the road – My conclusion

Well, the final design is not available today, so I can't really give a verdict on the upcoming HMT. But if you are interested: they will start a "pioneer program" that lets you take part in the beta and get the first wave of the device. Final shipping is planned for summer 2017 at $950.

For now I can only say that the current design already feels lightweight enough for a full day, and the complete speech control makes perfect sense in the given environment and worked well in the demo. Connectivity could not be tested, but I can imagine that with a regular 4G or 5G uplink you will be able to get your data up and down. It would have been nice to see more real-life demo scenarios to judge the workflow and usability.

AR in this device is not the funky AR we love to see – and expect from Magic Leap or a consumer HoloLens. It just gives you a video screen plus information overlaid on your camera feed, displayed on your mono screen. But it feels like a realistic, down-to-earth 2016 use case for the technology in that field. "It gets the job done" and already improves a lot compared to traditional systems (paper manuals, phone calls, flying in the technician instead of using his or her telepresence). We can imagine how things will get even more exciting once we have perfect AR overlays in it. A glimpse of more AR in industrial scenarios as described above can now also be seen in a new HoloLens video from ThyssenKrupp. Although the Microsoft design is obviously not rugged at all, it gives you an idea.


Banner photo (C) RealWear, Portrait photo (C) Tobias Kammann

“We believe in the incredible potential of AR”

During the last two days, AR and VR professionals from the industry gathered in Munich to present and discuss the current state of the art in the business of mixed reality in their fields. Attendees had the chance to learn from the pros who have used this technology from the very beginning.

The VDI-Wissensforum organizes (German-speaking) conferences and trainings for engineers and technical management. This conference was titled "AR and VR as smart assistance – virtual technologies in industrial applications".


Industrial Mixed Reality History


Typically, the biggest industrial players in Germany are the automotive companies plus some other heavy industries such as aviation. This industry had the money to research AR and VR a long time ago (and that's why I joined that circle a long time ago, too – to be able to research mixed reality professionally). They have very high costs and big pain points to address. For example, an Airbus never comes off a standardized production line – every single one is unique. There is no prototype. Basically, every client wants it customized, too! So, you can imagine how expensive planning errors or late adjustments would get. Here you just need to get as much work as possible done in bits and bytes – before you spend money on physical production!

Therefore, German engineering started with AR/VR very early. Many government-funded research projects were sponsored as well (like ARVIKA, ARVIDA, AVILUS, …) and are still running. Germany's AR company metaio (now Apple) was also spun out of one of these initiatives in the very beginning. The industry joined forces very early to push the technology here.

[Image: the VDI event 2016]

So, during this two-day event companies like Audi, Opel, Airbus, Lufthansa, MAN, Porsche or Bosch presented their work (see the full schedule and listing here). Research institutes like Fraunhofer IGD showed off their latest progress in tracking technology. The well-known AR company Re'flekt hosted the event (thanks, guys! Great show!).

The industry use cases differ a lot from what VR enthusiasts think of when they talk about Rift/Vive/PlayStation/Gear VR these days. Therefore I wanted to summarize what the industry does with AR and VR today (well, they do more, but let's keep it at this for today – I will only cover parts as examples):


Industrial Use Cases for Mixed Reality


Training & Maintenance

[Image: AR-supported training]

To start off, you need to train your own people. AR/VR is being used to train engineers, e.g. when a new model of a car is launched and the staff needs to know about all the parts and screws, how to assemble it and how to repair it. A typical professional training session can last two hours, and motivation might be limited when browsing through paper descriptions or PowerPoints. VW developed a new training tool that uses AR glasses for the trainee (and a supervisor tablet for the trainer). The trainee gets the next step or object to work with projected into his or her glasses, with a correct overlay of where to perform the next action (like fixing a screw). The trainer has control over the activity and can change difficulty levels (probably giving more or fewer hints). Their results showed that training time went down significantly and the cost of transfer of learning was reduced – making it all more efficient and less cumbersome. Trainees can work directly at the object – without twisting their heads towards a second screen or poster. They can fully focus on their primary activity. Easy multi-user and glasses-free operation is also being tested with a projection-based system.

Maintenance can work the same way. The engineer gets step-by-step instructions projected into the glasses, and additionally needed information (measured values from the car) can be presented – without the need for a second screen somewhere behind the car. Audi showed their well-known and constantly updated eKurzinfo app (we covered it ages ago). It gives end users an augmented manual for their new car, including maintenance instructions. The user can, for example, see where to fill in the cooling liquid (an AR arrow pointing to the right spot when the smartphone camera is aimed at the engine).

Audi mentioned their analytics results: many customers just don't know what their car (or any product) is capable of. An augmented view can help you learn more about your product. The client demands just-in-time and just-in-place information – not browsing for it in a manual in your favorite armchair at home. 83% of all innovations are missed by the client ("rollt vorbei" – they just roll by) because they simply don't know they are there. AR/VR is supposed to help here as well.


Planning & Analysis

Virtual reality in particular became a standard tool for initial planning and car development. You design and build your hardware in 3D and can see problems early, have faster reviews and much shorter development cycles – without yet spending money on physical models. This goes from the digital mock-up (DMU) to the final CAD model that will be built. Interestingly, Opel stated that most VR work is still being done in CAVEs and COVEs to present the results. VR HMDs are not widespread yet: the quality is too poor and, more importantly, you need to socially interact with many people during a review. Blocked vision, fully immersed, will not work – until everybody shares a common VR space that works flawlessly.

[Image: AR target/actual comparison]

Augmented reality, on the other hand, is often used on an everyday basis during car development integration tasks. Engineers use AR to augment the physical mock-ups (PMUs) with extra parts, or for guidance on how to assemble additional physical parts on the overall (physical) model. You get proof of whether the car can be assembled at all and support during the task itself. When problems need to be solved, AR will help. E.g. a tube does not fit into the engine space, though it did in the DMU. Then an AR view helps to identify differences between real and virtual ("Soll-Ist-Vergleich") – between planned and actual (see image). You can see the deviation of the real tube from the 3D overlay. Easy measurement and comparison tasks like this alone can already help a lot. This is just one simple example, but it scales up and enables much quicker discussions and decisions.
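
In essence, such a target/actual comparison computes per-point deviations between planned CAD geometry and points measured on the real part. A rough sketch (my own illustration, not a vendor tool; it assumes both point sets are already registered in one coordinate frame – which is exactly the hard tracking part in practice):

```python
import numpy as np

def soll_ist_deviation(planned_pts, measured_pts, tolerance_mm=2.0):
    """Per-point Euclidean deviation (mm) between CAD (planned) and real (measured)."""
    planned = np.asarray(planned_pts, float)
    measured = np.asarray(measured_pts, float)
    dev = np.linalg.norm(measured - planned, axis=1)
    return dev, dev > tolerance_mm

planned  = [[0, 0, 0], [100, 0, 0], [200, 10, 0]]     # points along the planned tube, in mm
measured = [[0.5, 0.2, 0], [101, 0, 1], [204, 14, 2]]  # points measured on the real tube
dev, out_of_tol = soll_ist_deviation(planned, measured)
print(dev.round(1), out_of_tol)  # [0.5 1.4 6. ] [False False  True]
```

The AR view essentially renders this comparison visually, overlaying the planned tube on the real one so the deviation becomes obvious at a glance.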


Virtual Design Reviews

[Image: unconfirmed renderings from Airbus, taken from AirlineReporter.com]

As already mentioned in the introduction and the Airbus example, VR is typically used to previsualize any product yet to be produced. Especially for expensive unique productions (like an Airbus) it's worth investing the time in a virtual representation that is not only technically correct but also visually appealing – to show to in-house designers as well as to the client before anything gets built. You try to get feedback and clearance from the client as early as possible. Walking through the product in VR obviously helps to seal the deal and avoid surprises.


Exhibition

During the conference you could also see different exhibitors presenting their technology. Among them was Extend3D with their professional projection-based AR system. I've reported on it multiple times before. A rather new use case they have established is projecting the to-be-created design onto the exterior of planes, boats, ferries and yachts. The workers who do the paint job of the custom design use it as a stencil to color the complete exterior. Before projected AR, it took many more hours and much more hardware effort to create a convincing design. A simple but great use case for projected augmented reality from Extend3D!

[Image: AR HMD pool]

One more thing to mention that might catch your interest if you work in the industry: my partners from AugmentedRealityExperts started and presented a new program for mixed reality HMDs. Currently, many different brands are throwing new HMDs onto the market. No one really knows whether they can keep their promises. Pretty sure some won't. It's also pretty certain that you could get stuck with the wrong HMD for your use case. With tight budgets it might become a problem to find the right one for your scenario without spending a lot of money. This is where the new initiative from AugmentedRealityExperts kicks in: you can join the partner program to get a loaner device, tips, demos and the right network to exchange experiences. Check out the campaign that launched just today.


Conclusion and Take-Aways

It became obvious once more that AR and VR are already deeply rooted within the industry and that all teams are hoping to get more things done in the mixed reality space. It optimizes efficiency, cuts training times and helps to visualize unbuilt objects to discuss, review and redesign. No one questions the big advantage. Although – especially in 2016 – it also became apparent how the continued lack of standards, and of a bigger palette of product options, often hinders AR/VR. Everybody was mourning the shutdown of AR company metaio (which historically served as the base for most AR activities in German industry). Fraunhofer IGD might step up into the light with their upcoming tracking solution focusing on model-based tracking and sensor fusion – but it's not there yet.

Overall, the major pain points remain interoperability and integration into an existing IT landscape and production pipeline. Plus: a "keep it simple" user experience. You want to cut out the operator. You want to be able to use it yourself, switch quickly between devices, and have it fully integrated with the company's data. That's still a challenge – less funky, but today more important than all of the AR technology's problems. Let's get easy access to the content! As always, content is king or queen!

Meet up with AR/VR pros at Digility conference

Hi everybody,

augmented.org is an official media partner of the Digility conference in Cologne. We will be on site during the conference days on September 22nd and 23rd and report on the event for you. But you also get the chance to meet up live at the conference for free!

Digility Conference

The conference is kicking off for the first time this year, but already has some high-class speakers to show. The panels and talks are all about augmented and virtual reality, baby steps, working industrial scenarios, business models, plus live demos and workshops. All aspects will get covered, and we get insight into the work of different German big players like Audi, VFX pros Mackevision (my former employer) or research institutes like Fraunhofer.

Digility states as its target groups: "DIGILITY is the platform on which people and companies meet who work on applications and solutions for the interaction between the digital and real world. At the DIGILITY conference, hardware and software producers, developers, researchers, investors and users come together to open up the complete breadth and depth of this topic. At the DIGILITY exposition, the solutions of today and tomorrow become perceptible."

With augmented.org you can win your free ticket to the event!

The regular price is 599 euros – augmented.org is offering two free tickets to the show for our fans and readers. Just send a mail to win@augmented.org and answer this simple question: Since when has augmented.org been blogging about Augmented Reality?

Alright? Got it? We will draw the winners on Monday the 12th from all correct answers and publish them (unless you don't want that). Well, hope to meet you at the conference. Make sure to check out the schedule and bring enough time!

Cheers,
TOBY.

Microsoft's dog Rover is alive in the Mixed Reality OS

Microsoft just presented a new video showing their plans for a mixed reality operating system. I reported on their (concept) plans in early July with their marketing designer video. But now it seems we are getting a step closer to reality. Or are we? Microsoft states specific steps and timings!

VR, AR, MR – everything!

In the new video (below) they claim to support everything from "VR, AR and MR". Given their previous choice of terms (Microsoft typically refers to AR as MR), you can ask yourself why they include all three now and what they mean by that. But those are just acronyms. More important is whether it works. We are supposed to be able to switch between AR and VR and run all Windows 10 apps – be they 2D or 3D – on the new mixed reality operating system.

The demo shows a woman in a room-scale setup with a controller stick that is also tracked in space. The HMD she puts on is no known device, nor does it resemble the HoloLens or other Microsoft products. In the demo she runs some windowed applications before going on a virtual infotainment journey, "HOLOTOUR", to Rome. Take a look:

The demo shows some established concepts, most prominently the virtual desktop with multiple windows. We see it on the HoloLens, but also nicely in apps like Envelop VR, where you can run all your windows on the Rift. The teleportation to a tourist spot, transitioning into an immersive view, is also presented nicely.

What I was a little bit afraid of was seeing Rover, the old Microsoft dog from "Bob", inside the video. Bob was an add-on Microsoft program meant to provide a more user-friendly interface for Windows. Though its easy metaphors of a living room, a "physical" calendar on the wall, etc. had some potential, the software failed miserably and became a laughing stock in Bill Gates's history. Its dog Rover got on our nerves later in the search function of Windows XP. Everybody tried to turn him off as quickly as possible (like Clippy).

But dog or no dog – to be fair: their old approach could now really come to life and make sense. Building a more user-friendly interface could finally work in VR and especially AR. The UX people and software engineers in Redmond could finally get their absolution. We got used to using windows, a mouse and abstract symbols to do operations. Now the system could lead us back to more natural interaction and concepts for human beings. Critics said "Bob" would hinder users from learning the Windows concept and that using it would be slower rather than faster. Here you could ask: why should we learn a windows concept to begin with? Can't we just act within the virtual world like we do within the real world? Maybe Microsoft has something up their sleeve to go in the right direction.

[Image: Microsoft Bob]

But the video itself is a bit low-key and, to be honest, simple. The environment is rather plain, the demo is short and not as highly produced as earlier videos. Also, we don't see any gesture interaction or multi-user scenarios. It seems like it needed to be finished quickly to have something to show.

They do not show how the tracking works or how the switch to AR would happen. They claim to support a "broad range of 6 degrees of freedom devices" and say that you can run the Windows Holographic shell and all Windows 10 applications. Version 1 of the specs is to be released in December, as one developer writes. He also states:

Next year, we will be releasing an update to Windows 10, which will enable mainstream PCs to run the Windows Holographic shell and associated mixed reality and universal Windows applications. The Windows Holographic shell enables an entirely new experience for multi-tasking in mixed reality, blending 2D and 3D apps at the same time, while supporting a broad range of 6 degrees of freedom devices.

So, will we be able to jump into it with Rifts, Vives, METAs and HoloLenses at will? Will it really be ready for consumers in 2017, as claimed? That would be really quick!

The Leap of Faith to save the World

Fortune Magazine just interviewed Magic Leap and has now published the video. If you have 24 minutes and want to know first-hand how Rony Abovitz (CEO) and Brian Wallace (CMO) tick, I'd say it's worth watching the interview with Fortune's Michal Lev-Ram yourself.

So, what do they reveal? After the initial definition ("mixed reality lightfield") it is confirmed that some kind of lightweight headset is needed (with Mark I), but that they would rather not call it "glasses", as it is just not comparable to a simple flat screen in front of your eyes. As mixed reality tends to sound hipper than AR these days, they differentiate their system from AR – with AR being the Terminator view with a visual layer / HUD instead. (Though I always disagree with this definition – mixed reality being the full continuum including VR.)

They talk a bit about the team wearing their system all day long – which is definitely marketing, but could also support the claim that it is really lightweight and you don't get a sore neck or eyestrain from it. The visual quality is supposed to be as good as in their published videos. They mention virtual people who get a different (non-realistic) look on purpose, to distinguish them from real people. Users had trouble interacting with the real world when virtual objects were in the way (and would cut off the user's hands, for example). This could get really exciting (and the good kind of scary). They chat a bit about the great new Pokémon Go release (love Nintendo) and how it could look with Magic Leap. Well, it would look like the marketing video (which I included just yesterday).

Currently they claim to have 600 people working on the system. The big factory with a fully equipped production line is running. Or almost ("debugging the production line" right now). So, a fuzzy "almost there" on the current status of the release. Regarding the timeframe of the roll-out and productive use in the streets (a question asked by investor Alibaba), they comment that adoption could go faster than anyone thinks. Brian says that within three years (by 2020) 70% to 80% of the current audience would be wearing a device like the one from Magic Leap. That would be stunningly quick if it were to happen that way!

For the field of applications they give a broad range of possible use cases. While gaming or entertainment could be among the first for private users (referring e.g. to Lucasfilm/ILM), there are many more. In the end – their words – they are designing a whole new contextual computing system, not one application (that's why they have to spend so much investor money). Basically, everything is possible. They imagine typical office applications and everyday stuff to support our daily life. But sectors like health care could be big as well. An app "look-by" lets you auto-scan the products and clothes around you to shop them directly through platforms like Amazon or Alibaba. In general it will yield tools that raise our productivity, allow for better social interactions or are just fun. But clear words: consumer market first, B2B later – which I think is a bit surprising (will the final device be as cheap as my new high-end smartphone in 2020?).

Their vision (when asked if we will soon have empty offices without laptops) is definitely to see all of the computing world via an HMD. With a classic computer or phone you always have screen edges where everything is cut off. With an HMD you have the whole space as your screen – it's just unlimited. (People have already mimicked virtual screens on their VR HMDs for development as a fun in-between step.) They want to take us away from those dusty screens, out into the natural world, to live and work outside again (like humans used to) – but with the digital information integrated seamlessly.

Hopefully they will offer an open platform for it in the spirit of the old internet. They claim that everybody will be a creator. Your daughter transforms your house into a unicorn world while you have set up a different version of your augmented physical space, etc. The individual user will be able to create his or her own worlds and objects and share them. Everybody could take part in changing the space we live in. "Human creativity could save the world!"

Join the magic force!

ILMxLAB has worked on some cool virtual and augmented reality solutions before (as I reported in January). So, no surprise that they are teaming up further with the big players and new innovators:

Lucasfilm / ILMxLAB and Magic Leap just announced that they are working on a cooperation. Their Twitter post with a short video hit the web yesterday, and we can finally see some more footage of the (possible) Magic Leap technology! It shows R2-D2 and C-3PO walking through the office building, and R2 projecting a hologram onto a table. Check it out below:

Again, we don't know what the HMD (or whatever it is) will look like. We only see the shot as you would perceive it. The quality itself is as in previous demos: a holographic look and quite stable tracking, but without any additional interaction or a hint of what state all this is in. Is it a quick, rough tech demo? Or a usable system where the content can easily be updated and exchanged for quick previs or real-time storytelling and experiences – focused on the director only, or aimed at a broader audience to dive into the Force in the real world?

Of course it's cool to see and it makes our mouths water, but no real news has been revealed. I hope we can expect more soon – after the patent showing a possible HMD, speculation is growing!