How VR Can Make the World a Better Place

Virtual reality (VR) can make the impossible possible — the rules of physical reality need no longer apply. In VR, you strap on a special headset and leave the real world behind to enter a virtual one. You can fly like a bird high above Manhattan, or experience the feeling of weightlessness as an astronaut on a spaceship.

VR relies on the illusion of being deeply engrossed in another space and time, far away from your current reality. In a split second you can travel to exotic locales or stand on stage at a concert with your favorite musician. Gaming and entertainment are natural fits for VR experiences. A startup called The Void plans to open a set of immersive virtual reality theme parks called Virtual Entertainment Centers, with the first one opening in Pleasant Grove, Utah, by June 2016.

This is an exciting time for developers and designers to be defining VR as a new experience medium. However, as the technology improves and consumer hardware and content become widely available, we must ask: how can this new technology be applied to benefit humanity?

As it turns out, this question is being explored on a few early fronts. For example, SnowWorld, developed at the University of Washington Human Interface Technology (HIT) Lab in 1996 by Hunter Hoffman and David Patterson, was the first immersive VR world designed to reduce pain in adults and children. SnowWorld was specifically developed to help burn patients during wound care. Hoffman explained how VR helps to alleviate pain by distracting patients:

“Pain requires conscious attention. The essence of VR is the illusion users have of going inside the computer-generated environment. Being drawn into another world drains a lot of attentional resources, leaving less attention available to process pain signals.”

However, VR is not all about escapism. The technology is also being used to create simulations that help people confront conditions like phobias and post-traumatic stress disorder (PTSD). VR provides a safe, controlled environment for exposure therapy in which patients, guided by a trained therapist, can process fears or trauma memories and practice coping strategies. Although VR therapy has existed for two decades, it has remained fairly inaccessible to the larger public. The emergence of affordable consumer hardware like the Samsung Gear VR can help patients finally take advantage of this life-changing technology.

VR has the tremendous ability to change the way we perceive reality. Humanity can also benefit when virtual reality changes how we see each other.

Film director Chris Milk believes VR can create the ultimate empathy machine. In collaboration with the United Nations (UN), Milk and his team went to a Syrian refugee camp in Jordan in December 2014 and shot the story of a 12-year-old girl named Sidra. When you’re inside the VR headset watching Clouds Over Sidra, you’re looking around her world, seeing it in full 360 degrees, in all directions. You’re not watching her through a television screen; you’re sitting right there in her room with her. It becomes your world, too. When you look down, you see that you’re sitting on the same ground that she is. “Because of that,” Milk said, “you feel her humanity in a deeper way. You empathize with her in a deeper way.”

Clouds Over Sidra was screened at the World Economic Forum in Davos in January 2015 to a group of leaders whose decisions affect the lives of millions. As Milk noted, these are people who might not otherwise be sitting in a tent in a refugee camp in Jordan. Milk is currently working with the UN on a series of these films in multiple countries, including Liberia and India. “We’re showing them to the people who can actually change the lives of the people inside of the films,” he explained.

“Through this machine, we become more compassionate,” Milk said. “We become more empathetic. We become more connected. And, ultimately, we become more human.”  

VR not only presents the opportunity to transport you; it has the power to truly transform you. And therein lies the great potential of VR: to return to reality and effect positive changes that benefit humankind.

This article also appeared on O’Reilly Radar.




Designing Beyond Screens

Virtual Reality (VR) strives to recreate the physical world in a virtual one. Augmented Reality (AR), on the other hand, can bring the digital into the physical world to create a hybrid reality. AR offers new ways of applying technology to immerse ourselves in our physical reality (rather than being removed from it), and even enhance it.

Interacting with screens is a big part of our everyday modern reality. We spend a great amount of time engaging with our world and each other through two-dimensional screens, whether via a smartphone, tablet, or computer. The world we live in, however, is three-dimensional and not flat: it is physical and involves the use of multiple senses. AR presents the opportunity to design beyond the screens we use today and create new experiences that better embody the full human sensorium.

In my last Radar article, I looked at how AR, wearable tech, and the Internet of Things (IoT) are augmenting the human experience. I highlighted how computer vision and new types of sensors are being combined to change the way we interact with and understand our surroundings. Here, I’ll look at how this can be extended by integrating the human senses beyond the visual — such as touch, taste, and smell — to further augment our reality.

Touching the digital and interacting with data in novel ways

Imagine using your hands to manipulate and pull virtual objects and data directly out of a 2D display and into the 3D world. GHOST (Generic and Highly Organic Shape-Changing) is a research project across four universities in the United Kingdom, the Netherlands, and Denmark exploring shape-changing displays that you can touch and feel. Jason Alexander, one of the researchers from Lancaster University, describes the technology: “Imagine having Google maps on your mobile phone, and when you pulled out your phone, you didn’t just see a flat terrain, the little pixels popped out so you saw if you had to walk over hills or valleys in order to reach your destination.” He explains, “This allows us to use our natural senses to perceive and interact with data.”
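
To make the idea concrete, here is a minimal sketch of that elevation-to-pixel mapping, assuming a small grid of height-adjustable pins; the grid size, height range, and sample terrain are illustrative assumptions, not details of the GHOST hardware.

    # Minimal sketch: normalize a 2D grid of terrain elevations (meters)
    # into physical pin heights (millimeters) for a shape-changing display.
    def elevation_to_actuator_heights(elevations, max_height_mm=10.0):
        lo = min(min(row) for row in elevations)
        hi = max(max(row) for row in elevations)
        span = (hi - lo) or 1.0  # avoid dividing by zero on flat terrain
        return [[(e - lo) / span * max_height_mm for e in row]
                for row in elevations]

    # A 3x3 patch of terrain along a walking route (elevations in meters)
    patch = [[120.0, 125.0, 140.0],
             [118.0, 130.0, 155.0],
             [115.0, 128.0, 160.0]]
    print(elevation_to_actuator_heights(patch))  # taller pins where it climbs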

A surgeon, for instance, could work on a virtual brain physically, engaging in a fully tactile experience before performing a real-life operation. Artists and designers using physical materials such as clay could continually remold objects using their hands and store them in a computer as they work. Kasper Hornbæk, a project researcher from the University of Copenhagen, suggests that such a display could also allow you to hold the hand of your significant other, even if they are on another continent.

Esben Warming Pedersen, a member of the research team at the University of Copenhagen, discusses how these “deformable screens” differ from traditional touch screens. With no glass surface in front of the content, you can reach into the screen and touch the data. Pedersen explains how this is possible by first looking at the way normal glass touch screens work: “All that the iPad actually sees is the tip of your finger touching the glass display. So, when an iPad tries to find out where and how we touch it, you can think of the iPad actually as a coordinate system.” The deformable display is more complex than locating the coordinates of your fingertip; it uses depth data captured by a 3D depth camera. Pedersen is developing computer vision algorithms that represent this 3D data in a way the computer can better understand and apply in interactions.
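
A toy illustration of the contrast Pedersen draws, assuming the depth camera sits behind an elastic surface so that a press moves the fabric away from the camera; the frame format and threshold are invented for the example, not taken from the Copenhagen system.

    # A glass touch screen reports only (x, y) contact coordinates; a
    # deformable display instead works from a depth image. Here a "press"
    # is the point where the surface deviates most from its resting depth.
    def find_deepest_press(depth_frame, rest_depth_mm, min_push_mm=5.0):
        best = None
        for r, row in enumerate(depth_frame):
            for c, d in enumerate(row):
                push = d - rest_depth_mm  # pressing moves the fabric away
                if push >= min_push_mm and (best is None or push > best[2]):
                    best = (r, c, push)
        return best  # (row, col, push depth in mm), or None

    frame = [[800, 800, 800],
             [800, 812, 800],  # a fingertip pressing ~12 mm into the surface
             [800, 800, 800]]
    print(find_deepest_press(frame, rest_depth_mm=800))  # (1, 1, 12)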

One challenge Pedersen identifies is that we don’t yet know how to interact with these new screens. We have an existing common vocabulary for 2D displays, such as pinching our fingers to zoom out of a picture and sliding to switch to another one, but for 3D gestures, or deformable gestures, it is far less apparent what the equivalent actions should be. Pedersen is running user studies in search of an intuitive vocabulary of new gestures.

Combining multiple senses to experience a new reality

These new types of experiences aren’t limited to engaging one sense at a time. Adrian David Cheok, professor of pervasive computing at City University London, is working on new technologies to allow you to use all of your senses while communicating through the Internet. Cheok is building a Multi-Sensory Internet to transcend the current limitations of online communication. “Imagine you’re looking at your desktop, or your iPhone or laptop — everything is behind the glass, behind a window, you’re either touching glass or looking through glass. But in the real world, we can open up the glass, open the window and we can touch, we can taste, we can smell. So, what we need to do is we need to bring computers — the way we interact with computers — like we do in the physical world, with all of our five senses,” says Cheok.

Cheok’s sensorial inventions include an Electronic Taste Machine and a device called Scentee. The Electronic Taste Machine consists of a plexiglass box you stick your tongue into to taste different flavors over the Internet. Using electrical and thermal stimulation, the interface temporarily tricks your tongue into experiencing sour, sweet, bitter, and salty tastes, depending on the frequency of the current passing through the electrodes. Scentee is a small device that attaches to the audio jack on your smartphone and releases an aroma using chemical cartridges when you receive a text message. Cheok explains: “For example, somebody may want to send you a sweet or a bitter message to tell you how they’re feeling. Smell and taste are strongly linked with emotions and memories, so a certain smell can affect your mood; that’s a totally new way of communicating.” He also references a commercial application he is working on with the Michelin-starred restaurant Mugaritz in San Sebastian, Spain, to allow you to smell the menu through your phone.
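
Purely as an illustration of the idea, the sketch below maps an emotional “taste message” onto stimulation settings in the spirit of the Electronic Taste Machine; the frequencies, temperatures, and mood-to-taste pairings are made-up placeholders, not the device’s actual calibration.

    # Illustrative lookup from an intended "taste message" to stimulation
    # parameters. These values are placeholders, not device calibration.
    TASTE_PARAMS = {
        "sour":   {"current_hz": 50,  "temp_c": 20},
        "sweet":  {"current_hz": 100, "temp_c": 35},
        "bitter": {"current_hz": 150, "temp_c": 25},
        "salty":  {"current_hz": 200, "temp_c": 20},
    }

    def encode_taste_message(feeling):
        """Map an emotional message onto settings for the tongue interface."""
        mood_to_taste = {"affection": "sweet", "resentment": "bitter"}
        taste = mood_to_taste.get(feeling, "sweet")
        return taste, TASTE_PARAMS[taste]

    print(encode_taste_message("resentment"))  # a bitter message, as Cheok suggests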

Cheok’s examples allow you to experience and share something virtual in reality; however, they are still simulations that mimic the real thing. Meta Cookie from the University of Tokyo combines scent and computer vision in a similar spirit, but here the technology is used to augment the taste of a real cookie. Meta Cookie merges an interactive olfactory display with plain edible cookies. A see-through head-mounted display lets the user view various cookie selections in AR (with different cookie textures and colors digitally layered atop the plain cookie). Once you select the flavor of cookie you would like to eat, an air pump delivers the scent of the chosen cookie to your nose. This creates the effect that you are eating a flavored cookie, despite it being a plain one. If you don’t like the taste of the cookie you chose, you can transform it into another flavor and take another bite. In fact, you could have one ultimate cookie that embodies a different flavor with each bite. Your experience can be entirely customizable. We are beginning to see a shift toward physical objects imbued with digital properties, making them shape-shifting and adaptable to our personal needs, and in this case, our tastes.
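
The interaction loop is easy to schematize: one flavor choice drives both the visual overlay and the scent pump. The device interfaces below are pretend placeholders, not the Tokyo group’s actual system.

    # Schematic of the Meta Cookie loop: the chosen flavor drives both the
    # head-mounted display's overlay and the scent pump.
    FLAVORS = {
        "chocolate": {"texture": "choco_overlay.png", "scent_cartridge": 2},
        "lemon":     {"texture": "lemon_overlay.png", "scent_cartridge": 5},
    }

    def take_bite(flavor):
        f = FLAVORS[flavor]
        print(f"HMD: render {f['texture']} over the plain cookie")
        print(f"Pump: release scent from cartridge {f['scent_cartridge']}")

    take_bite("chocolate")  # one bite tastes of chocolate...
    take_bite("lemon")      # ...the next of lemon, same plain cookie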

Traditional screens as we know them are rapidly evolving, giving way to novel interactions and experiences. A new reality is coming that will forever change the way we engage with our surroundings. AR is about a new sensory awareness, deeper intelligence, and a heightened immersion in our physical world and with each other. 

Read more about the ideas and inventions this next major technological shift represents in my book, Augmented Human: How Technology is Shaping the New Reality.

This article also appeared in O’Reilly Radar.




Augmenting the Human Experience

Augmented reality (AR), wearable technology, and the Internet of Things (IoT) are all really about human augmentation. They are coming together to create a new reality that will forever change the way we experience the world. As these technologies emerge, we must place the focus on serving human needs.

The Internet of Things and Humans

Tim O’Reilly suggested the word “Humans” be appended to the term IoT. “This is a powerful way to think about the Internet of Things because it focuses the mind on the human experience of it, not just the things themselves,” wrote O’Reilly. “My point is that when you think about the Internet of Things, you should be thinking about the complex system of interaction between humans and things, and asking yourself how sensors, cloud intelligence, and actuators (which may be other humans for now) make it possible to do things differently.”

I share O’Reilly’s vision for the IoTH and propose we extend this perspective and apply it to the new AR that is emerging: let’s take the focus away from the technology and instead emphasize the human experience.

The definition of AR we have come to understand is a digital layer of information (including images, text, video, and 3D animations) viewed on top of the physical world through a smartphone, tablet, or eyewear. This definition of AR is expanding to include things like wearable technology, sensors, and artificial intelligence (AI) to interpret your surroundings and deliver a contextual experience that is meaningful and unique to you. It’s about a new sensory awareness, deeper intelligence, and heightened interaction with our world and each other.

Seeing the world in new ways

We are seeing AR pop up in all facets of life, from health and gaming to communication and travel. Most AR applications today live in your pocket on mobile devices, enabling you to explore the physical world around you, untethered from your desktop computer. One of the earliest applications of AR to deliver a genuinely helpful experience was Word Lens. The application allows you to point your smartphone at printed text in a foreign language, such as a road sign or a menu, and translate it on the fly into the language of your choice. Suddenly, you are more deeply immersed and engaged with your surroundings via a newfound contextual understanding assisted by technology.
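
A rough sketch of a Word Lens-style loop might look like the following; the OCR and translation steps are stubbed out with toy stand-ins, since the product’s actual engine is proprietary.

    # Sketch of a translate-what-you-see loop. ocr_stub and translate_stub
    # are toy stand-ins for a real OCR engine and translation service.
    def ocr_stub(frame):
        """Pretend OCR: returns (bounding_box, text) pairs found in the frame."""
        return [((10, 40, 120, 60), "salida")]

    def translate_stub(text, target_lang):
        """Toy dictionary lookup standing in for a real translation engine."""
        return {"salida": "exit"}.get(text, text) if target_lang == "en" else text

    def augment_frame(frame, target_lang="en"):
        """Pair each recognized text region with its translation; a renderer
        would then draw the translations over the live camera frame."""
        return [(box, translate_stub(text, target_lang))
                for box, text in ocr_stub(frame)]

    print(augment_frame(frame=None))  # [((10, 40, 120, 60), 'exit')]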

Word Lens solves a human need. What if this same concept of using technology to augment your experience was extended to include other types of sensors, data, and networks? We are beginning to see examples of this, particularly in health care and wearable tech, with a higher goal of applying technology to help people live better lives. A perfect example of thought leaders exploring this new frontier is Rajiv Mongia, director of the Intel RealSense Interaction Design Group. Mongia and his team have developed a wearable prototype to help people with low or no vision gain a better sense of their surroundings. Combining a camera, computer vision, and sensors worn on the human body, the prototype is able to “see” objects within a few yards of you and tell you approximately where an object is located: high, low, left, or right, and whether the object is moving away or getting closer.

This is all communicated to you through vibration motors embedded into the wearable. The tactile feedback you experience is comparable to the vibration mode on your mobile phone, with the intensity corresponding to how close an object is to you. For example, if a wall or person is near you, the vibration is stronger, and if it’s farther away, it’s less intense. Mongia said that people who’ve tried the prototype say it has promise, that it augments their senses and helps them to “feel” the environment around them.
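
A simplified sketch of that feedback mapping appears below: the closer the object, the stronger the vibration, with separate motors hinting at direction. The range, angles, and motor names are illustrative assumptions, not details of Intel’s prototype.

    # Nearer objects produce stronger vibration; separate motors hint at
    # direction. Range, angles, and motor names are illustrative only.
    def vibration_for(distance_m, bearing_deg, max_range_m=3.0):
        """bearing: 0 = straight ahead, negative = left, positive = right."""
        if distance_m > max_range_m:
            return None  # out of sensing range, stay silent
        intensity = 1.0 - (distance_m / max_range_m)  # nearer -> stronger
        if bearing_deg < -20:
            motor = "left"
        elif bearing_deg > 20:
            motor = "right"
        else:
            motor = "center"
        return motor, round(intensity, 2)

    print(vibration_for(0.5, -45))  # ('left', 0.83): a wall close by on the left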

Advancing augmented reality for humanity

The Intel prototype is an example of empowering humans through technology. In developing the wearable system, Mongia asked, “If we can bring vision to PCs and tablets, why not use that same technology to help people see?” This question exemplifies the spirit of the Internet of Things and Humans: giving people greater access to computer intelligence while emphasizing the human experience.

This greater goal will require seeing beyond just the technology and looking at systems of interaction to better enable and serve human needs. Tim O’Reilly has described Uber as an early IoT company. “Most people would say it is not; it’s just a pair of smartphone apps connecting a passenger and driver. But imagine for a moment the consumer end of the Uber app as it is today, and on the other end, a self-driving car. You would immediately see that as IoT.” Uber is a company that is built around location awareness. O’Reilly explained, “An Uber driver is an augmented taxi driver, with real-time location awareness. An Uber passenger is an augmented passenger, who knows when the cab will show up.”

While Uber strives to provide its users with an experience of convenience and visibility, other smartphone applications available today use the reach of mobile and the power of social networking to truly help people. Be My Eyes, for example, is a mobile app that connects a person who is visually impaired with a sighted person who can provide assistance. Using a live video connection, a sighted helper is able to see and describe aloud what a visually impaired person is facing. Since January 2015, more than 113,000 volunteers have signed up to help close to 10,000 visually impaired people around the world in 80 languages.
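
At its heart is a simple matching step: pair a request for help with an available volunteer who shares a language. The sketch below illustrates the idea only; it is not the app’s actual logic.

    # Pair a help request with an available sighted volunteer who shares a
    # language. An illustration of the idea, not Be My Eyes' real code.
    def match_volunteer(request_lang, volunteers):
        for v in volunteers:
            if v["available"] and request_lang in v["languages"]:
                return v
        return None  # no match right now; a real app would keep trying

    volunteers = [
        {"name": "Ana",  "languages": {"es", "en"}, "available": False},
        {"name": "Mika", "languages": {"fi", "en"}, "available": True},
    ]
    print(match_volunteer("en", volunteers))  # Mika answers the video call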

Be My Eyes is an early AR application in the same way O’Reilly described Uber as an early IoT company. Just as Uber would more readily be identified as IoT if a self-driving car were on the other end, Be My Eyes would more readily be considered AR if a computer were using AI to identify what you were looking at. Apps like Be My Eyes are significant because they point the way to a new, altruistic augmentation of reality, one that builds on the growth of the sharing economy, the power of our devices, and humans working together with computers to advance AR for humanity.

Augmented human

My book Augmented Human: How Technology is Shaping the New Reality, published by O’Reilly Media, expands upon the ideas and inventions this next major technological shift presents. By inspiring design for the best of humanity and the best of technology, Augmented Human is essential reading for designers, technologists, entrepreneurs, business leaders, and anyone who desires a peek at our virtual future.

This article also appeared in O’Reilly Radar.




The State of Augmented Reality

Unlike virtual reality (VR), augmented reality (AR) provides a gateway to a new dimension without the need to leave our physical world behind. In AR we still see the real world around us, whereas in VR the real world is completely blocked out and replaced by a computer-generated environment that fully immerses the user.

AR today

The most common definition of AR to date is a digital overlay on top of the real world, consisting of computer graphics, text, video, and audio, which is interactive in real time. This is experienced through a smartphone, tablet, computer, or AR eyewear equipped with software and a camera. Examples of AR today include the translation of signs or menus into the language of your choice, pointing at and identifying stars and planets in the night sky, and delving deeper into a museum exhibit with an interactive AR guide. AR presents the opportunity to better understand and experience our world in unprecedented ways.

AR is rapidly gaining momentum (and significant funding), with great advances and opportunities in science, design, and business. It is not often that a whole new communications medium is introduced to the world. AR will have a profound effect on the way we live, work, and play. Now is the time to imagine, design, and build our virtual future.

Having worked with AR for a decade as a Ph.D. researcher, designer, and technology evangelist, I’ve watched it evolve in both technology (software and hardware) and experience design. An AR experience is commonly triggered by tracking something in the physical environment that activates the AR content. Images, GPS locations, and the human body and face can all be tracked to initiate an AR experience, and more complex triggers like emotion and voice are expanding this list; a schematic of this trigger pattern follows below. We are also seeing a rise in AR hardware, with a particular emphasis on digital eyewear that supports gesture interaction, from companies like Magic Leap and Microsoft with its recently announced HoloLens headset.
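
The schematic below shows that trigger pattern: an experience activates when a tracker recognizes something in the environment. The trigger keys and content names are invented for illustration.

    # An AR experience activates when a tracker recognizes something in the
    # environment: an image, a GPS location, a face, and so on.
    TRIGGERS = {
        "image:movie_poster": "play_trailer_overlay",
        "gps:48.8584,2.2945": "show_landmark_history",
        "face:smile":         "launch_confetti_effect",
    }

    def on_tracker_event(kind, key):
        """Look up and activate AR content for a recognized trigger."""
        content = TRIGGERS.get(f"{kind}:{key}")
        if content:
            print(f"Activating AR content: {content}")
        return content

    on_tracker_event("image", "movie_poster")  # play_trailer_overlay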

Designing AR for tomorrow

We are at a moment where we are seeing a shift from AR as a layer on top of reality to a more immersive contextual experience that combines wearable computing, machine learning, and the Internet of Things (IoT). We are moving beyond holding up our smartphones to watch three-dimensional animations like dinosaurs appear, toward assistive technology that helps the blind to see and navigate their surroundings. AR is life-changing, and there is enormous potential here to design experiences that surpass gimmickry and have a positive effect on humanity.

MIT Media Lab founder Nicholas Negroponte said, “Computing is not about computers anymore. It is about living.” AR, too, is no longer about technology; it’s about defining how we want to live in the real world with this new technology and how we will design experiences that are meaningful and help advance humanity. There is an immediate need for storytellers and designers of all types to aid in defining AR’s trajectory. The technology exists; now it’s about authoring compelling content and creating meaningful experiences in this new medium.

It’s critical that we are asking these big questions now, at a time when AR is still largely undefined. I’m excited to be able to initiate and have these conversations across disciplines. I hope you’ll join me at O’Reilly’s Solid Conference 2015 in San Francisco June 23-25, where I’ll talk about the new opportunities this technological shift represents, highlighting and expanding on significant moments, inventions, and concepts. I’ll also be speaking at the annual Augmented World Expo 2015 in Silicon Valley June 8-10, where this year’s theme is “Superpowers to the People.”

Keeping it human-centered

For me, it’s about maintaining our humanness in a sea of limitless options within this new medium. We must think critically about how we will place human experience at the center. It’s not about being lost in our devices; it’s about technology receding into the background so that we can engage in human moments.

An article in Forbes by John Hagel and John Seely Brown looked at how IoT can help to enhance human relationships. Hagel and Brown described a scenario (that can be powered with current technology) of “data-augmented human assistance,” where a primary care physician wearing digital eyewear interacts with a patient to listen attentively and maintain eye contact while accessing and documenting relevant data. With the process of data capture and information transfer offloaded into the background, such devices can be applied to improve human relationships. “Practitioners can use technology to get technology out of the way — to move data and information flows to the side and enable better human interaction,” wrote Hagel and Brown, noting how such examples highlight a paradox that is inherent in the IoT: “although technology aims to weave data streams without human intervention, its deeper value comes from connecting people.”

This new wave of AR that combines IoT, big data, and wearable computing also has an incredible opportunity to connect people and create meaningful experiences, whether it’s across distances or being face to face with someone. The future of these new experiences is for us to imagine and build. Reality will be augmented in never-before-seen ways. What do you want it to look like and what role will you play in defining it?

This article also appeared on O’Reilly Radar.




AR and VR: Our Deep Wish to Make the Virtual Real

When we close our eyes at night we enter a virtual dream world. We can fly, see loved ones who've passed, and defy the limits of physical reality. Time, space, and our bodies are different in our dreams. Anything is possible and the rules of the physical waking world no longer apply. Our imagination reigns supreme here. In that moment, it is all real.

As humans, I believe we have a deep-seated desire to push the limits of physical reality to be able to inhabit these dreamscapes. Virtual Reality and Augmented Reality bring us closer to this dream.

We are explorers, we are inventors, we are storytellers, we are world builders. We have an innate curiosity to travel to the edge of our world, beyond our horizons, to seek and create new vistas. The power of the virtual fused with reality can help satisfy this wish. We can now step inside of our imagination and welcome others into the deep recesses of our dream worlds to share that reality.

May we imagine and build an awe-inspiring reality together.




Will AR Make Us Masters of the Information Age?

A new wave of AR applications aims to help people better understand and interact with the world.

Augmented reality (AR) is often confused with virtual reality (VR). Each allows us to see and interact with reality in very different ways. Both are evolving rapidly, but innovations in AR could help people become masters of the Information Age.

In VR, we are completely immersed in a computer-generated environment, leaving the physical world behind. Common examples of VR include flight simulator training, real estate walk-throughs, exposure therapy for phobias, and immersive 3D video games.

In AR, we remain in our physical surroundings, seeing and interacting with the real world. You can use AR to instantly translate signs or menus into the language of your choice, point at and identify stars and planets in the night sky, and delve deeper into a museum exhibit with an interactive AR guide.

Findings from a recent survey by the Pew Research Center indicate the vast majority of American Internet users believe the web “helps them learn new things, stay better informed on topics that matter to them, and increases their capacity to share ideas and creations with others.”

Internet-connected devices, from smartphones and tablets to a pair of smart eyeglasses, bring supplemental or reference information and multimedia to what we already see in the real world. AR companies like Metaio and Blippar are pioneering this for the mainstream, with applications ranging from education to retail.

AR Turns a Corner

AR is hitting a second wave that will profoundly change the way we experience reality. It is moving beyond a digital layering of information atop reality to combine with wearable technology, sensors, machine learning, artificial intelligence, big data, and the Internet of Things.

Contrary to pop culture perceptions, this new era of technological realities is not about becoming cyborg-like, supplanting human ability or replacing the human imagination. Instead, it’s about extending human capacity to design unprecedented experiences, according to many researchers who believe we have an entirely new medium on our hands.

Dr. Steve Mann, the father of wearable computing, is considered the world’s first cyborg; he has been inventing and wearing personal computers that assist his eyesight since the 1970s.

When asked at his 2013 keynote at Augmented World Expo in Silicon Valley what the killer app for AR is, Dr. Mann responded, “Reality.”

Mann explained that, like any other technology, AR has to make our lives better in order to succeed.

In her 2009 TED talk, athlete Aimee Mullins explored the idea of prosthetic limbs making their wearers “super enabled” rather than disabled.

“It is no longer a conversation about overcoming deficiency,” she said. “It’s a conversation about augmentation. It’s a conversation about potential.”

AR threads together the real and digital worlds, but for it to become as common and natural in our daily lives as personal computing, we must understand and eventually come to trust its capabilities in order to experience the true benefits.

This next wave of AR presents a shift from static applications to more fluid, contextually adaptive experiences, in which our devices will be highly cognizant of our preferences and our movements through changing environments.

Museums, for instance, are beginning to mine specific information from visitors to help deliver more personalized experiences.

Sree Sreenivasan, chief digital officer at New York’s Metropolitan Museum of Art, said, “I want to be able to know exactly what people have seen, what they love, what they want to see more of, and have the ability to serve it up to them instantly.”

For example, he said, “If someone loves a painting they’re looking at, they could get an instant coupon for the catalog, or a meal being sold at the cafeteria that’s based on it.”

AR Gets Personal

Future AR applications could change our relationship with personal devices.

Dr. Genevieve Bell, an anthropologist and Intel Fellow working at the intersection of cultural practice and technology adoption, describes a world in which we have more reciprocal relationships with our devices, allowing them to look after us, anticipate our needs, and even act on our behalf, almost like an invisible assistant.

Carolina Milanesi, a research vice president at the firm Gartner, predicts that by 2017 our smartphones will be more alert, if not smarter, than we are, at least about many things.

“If there is heavy traffic, [your smartphone] will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague,” said Milanesi.

“The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data.”

Gartner’s research suggests this will begin with time-consuming menial tasks, such as calendaring or responding to mundane email messages. Once we become more confident in outsourcing these to our smartphones, we will grow accustomed to apps and services taking control of other aspects of our lives.
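
As a toy version of that scenario, the sketch below shifts an alarm earlier when traffic is heavy, or sends an apology instead for a low-stakes meeting; the thresholds and the boss-versus-colleague rule are my own illustrative assumptions, not Gartner’s.

    # Shift an alarm earlier when traffic is heavy, or send an apology for a
    # low-stakes meeting. Thresholds and rules are illustrative assumptions.
    def plan_morning(meeting_min, commute_min, traffic_delay_min, attendee):
        """Times are minutes after midnight; returns (action, when)."""
        leave_by = meeting_min - (commute_min + traffic_delay_min)
        if traffic_delay_min <= 10:
            return ("wake_as_usual", leave_by)
        if attendee == "boss":
            return ("wake_early", leave_by)      # can't be late for the boss
        return ("send_apology", meeting_min)     # colleague: apologize instead

    print(plan_morning(9 * 60, 30, 25, "boss"))  # ('wake_early', 485)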

But are we ready to entrust more of our lives to the new AR capabilities of intelligent devices?

The late John Rheinfrank described a framework in which users engage with an adaptive system to “build worlds that collaboratively participate in the [co-evolution] of our individual and collective abilities.”

He described these as worlds that shift “to meet our abilities, to anticipate whatever they are or what we want them to be.”

In Spike Jonze’s film “Her,” Samantha, the intelligent operating system, connects to everything in her user Theodore’s world, even taking his thoughts out to the Internet to find comparisons. Some might argue this helped Theodore be more human, while others saw him losing his grip on reality.

In discussing the film, Intel Futurist Brian David Johnson described how for decades our relationship with technology has been based on an input-output model. Essentially, it has been a command-and-control relationship.

If commands aren’t communicated correctly or our device doesn’t understand our accent, that relationship screeches to a halt.

Today, our computing devices know us better than ever, thanks to services that track our health, pay our bills, and notify us when there’s traffic ahead. But despite the increasing intelligence of AR applications and similar technologies, Johnson maintains that technology is still just a tool that must serve human values.

“We can have the ability to design our machines to take care of the people we love, allowing us to extend our humanity,” he said, referring to our ability to design “our better angels.”

This new wave of AR may make us even more reliant upon technology, but if it is designed right, with human interaction at the core, AR can free us from screens, allowing us to focus more deeply on the real-world relationships that we love.

The question we need to ask as AR forges ahead, said Johnson, is “What are we optimizing for?”

The answer needs to be: to make people’s lives better.

This article also appeared on Intel IQ.




Augmented Reality M&A – Blippar Acquires Layar

The US and UK company Blippar has announced on its blog that it has acquired its Dutch competitor Layar. According to CEO Ambarish Mitra, the acquisition is highly “complementary,” and the M&A process took only three to four months to complete. According to TheNextWeb, Blippar now has 50 million users. Source: Blippar Blog & TheNextWeb

Robert Scoble Visits Augmented Reality Start-up Meta

Robert Scoble has now taken notice of Meta as well and visited its CEO, Meron Gribetz, and COO, Ben Sand, at their headquarters; the video is well worth watching. Related articles: Augmented Reality glasses “MetaPro” for $3,000; Spaceglasses – Augmented Reality glasses from Meta