Advanced Brain Monitoring is a 17-year-old neurotechnology company that has been able to extract a wealth of useful information from EEG data. They’ve developed specific EEG metrics for drowsiness, flow-state induction, engagement, stress, emotion, and empathy, as well as biomarkers for different types of cognitive impairment. They’ve also developed a brain-computer interface that can be integrated with a VR headset, which has allowed them to build a couple of VR medical applications for PTSD exposure therapy as well as some experimental VR treatments for neurodegenerative diseases like dementia.
LISTEN TO THE VOICES OF VR PODCAST
I had a chance to catch up with Advanced Brain Monitoring’s CEO and co-founder Chris Berka at the Experiential Technology conference where we talked about their different neurotechnology applications ranging from medical treatments, cognitive enhancement, accelerated learning, and performance training processes that guide athletes into optimal physiological flow states.
Advanced Brain Monitoring operates in a medical context, with an institutional review board and HIPAA-mandated privacy protocols, and so we also talked about the ethical implications of capturing and storing EEG data in a consumer context. Berka says, “That’s a huge challenge, and I don’t think that all of the relevant players and stakeholders have completely thought through that issue.”
They’ve developed a portfolio of biomarkers for neurodegenerative diseases including Alzheimer’s disease, Huntington’s disease, mild cognitive impairment, frontotemporal dementia, Lewy body dementia, and Parkinson’s disease. They’ve shown that it’s possible to detect a number of medical conditions based upon EEG data, which raises additional ethical questions for any future consumer VR company that records and stores EEG data. What are their disclosure or privacy-protection obligations if they can potentially detect a number of different medical conditions before you’re aware of them?
The convergence of EEG and VR is still in the DIY and experimental phase, with custom integrated B2B solutions coming soon from companies like Mindmaze, but it’s still pretty early for consumer applications of EEG and VR. Any integration would require piecing together hardware options from companies like Advanced Brain Monitoring or the OpenBCI project, and then you’d also likely need to roll your own custom applications.
There are a lot of exciting biofeedback-driven mindfulness applications and accelerated learning and training applications that will start to become more available, but some of the first EEG and VR integrations will likely be within the context of medical applications like neurorehabilitation, exposure therapy, and potential treatments for neurodegenerative diseases.
Researchers from Disney use a virtual reality headset and a real ball tracked in VR to understand the perceptual factors behind the seemingly simple act of catching a thrown object. The researchers say the work sets a foundation for more complex and meaningful dynamic interaction between users in VR and real-world objects.
Disney Research, an arm of The Walt Disney Company, does broad research that applies across the company’s media and entertainment efforts. With Disney known as one of the pioneering companies to employ VR in the out-of-home space prior to the contemporary era of VR, it should come as no surprise that the company continues to explore this space.
In a new paper from Disney Research, a VR headset is used in conjunction with a high-end tracking system to recreate the simple experience of catching a real thrown ball in VR. But why? With the experience being simulated in virtual reality, the researchers are free to easily modify it—in ways largely impossible in real life—as they see fit to study different aspects of the perception and action of catching a thrown object.
In this case, researchers Matthew K.X.J. Pan and Günter Niemeyer started with a visualization that showed just a ball flying through the air as you’d see in real life. Using VR they were able to add a virtual trajectory line to the ball’s path, and even remove the ball completely, to see what happened when the user had only the trajectory line to rely on. The video heading this article summarizes the tests they performed.
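To give a sense of how such a trajectory line might be generated (the paper doesn’t include its code, so this is only a rough sketch under simple ballistic assumptions, with hypothetical position and velocity inputs from the tracking system), the ball’s tracked state can be extrapolated with basic projectile motion:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])  # m/s^2, y-up coordinate system

def predict_trajectory(position, velocity, dt=0.01, floor_y=0.0):
    """Extrapolate a ballistic arc from the ball's current tracked state.

    position, velocity: assumed 3D vectors from the tracking system.
    Returns predicted points until the ball reaches floor height.
    """
    points = [np.asarray(position, dtype=float)]
    vel = np.asarray(velocity, dtype=float)
    while points[-1][1] > floor_y and len(points) < 500:
        vel = vel + GRAVITY * dt
        points.append(points[-1] + vel * dt)
    return points

# Example: a ball tossed from 1.5 m height toward the user.
arc = predict_trajectory([0.0, 1.5, -3.0], [0.5, 3.0, 2.0])
print(f"{len(arc)} points; predicted landing near {arc[-1].round(2)}")
```

The resulting list of points could then be rendered as the kind of trajectory line the researchers showed users, though their actual prediction method may differ.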
Perhaps most interesting is when they removed the ball and the trajectory line and only showed the user a target of where the ball would land. Doing this led to a distinct difference in the user’s method of catching the ball.
Whereas the ball and trajectory visualization resulted in a natural catching motion, with only the target to rely on, the user switched to what the authors described as a more “robotic” motion to align their hand to where the ball would land, and then wait for it to arrive.
This suggests that our brains don’t simply calculate the endpoint of the trajectory and then move our hand to the point to catch it, like a computer might; instead it seems as if we continuously perceive the motion of the ball and synchronize our movements with it in some way. The authors elaborate:
20 [of 132] of these tosses were made with only the [visualization of the] virtual ball which most closely matches how balls are caught in the physical world. In this condition, 95% of balls were caught, indicating that our system allows users to catch reliably. Video and screen capture footage indicate that during the catch, the user visually focuses on the trajectory of the ball and does not keep their hands within viewing range until just before the catch. From this evidence, it can be inferred that proprioception is used to position the hands using visual and depth cues of the ball.
Catching with the other visualizations did not seem to affect catching behaviors, except in the cases where the virtual ball was not rendered: the removal of the virtual ball from the VR scene seems to allow the catcher’s hands to reach the catch location much earlier prior to catching. The most apparent explanation for this phenomenon lies with the observation that the user is forced to alter catching strategy: the catcher has to rely on the target point/trajectory and so the motor task has changed from a catching task, which had required higher brain functions to estimate the trajectory, to a simpler, visually guided pointing task requiring no estimation at all.
The researchers say that their work aims to study how people can interact dynamically with real objects in VR, with this first study of a simple task laying the groundwork for more complex interactions:
…combining virtual and physical dynamic interactions to enrich virtual reality experiences is feasible. […] We believe this work provides valuable insight which informs how interactions with dynamic objects can be achieved while users are immersed in VR. As a result of these preliminary findings, we have discovered many more avenues for future work in dynamic object interactions in VR.
Which in research talk means, ‘you better bet we’ll be studying this further!’
Danfung Dennis of Condition One has an ambitious vision for the potential of virtual reality, and it’s one of the most radical ones that I’ve come across. He believes that VR can be used as a tool to cultivate compassion through having an embodied experience of witnessing suffering within VR. He says that the process of witnessing suffering can be used as a type of advanced Buddhist mind training to focus your attention, contemplate your visceral reactions, and grow compassion through taking action. These brief VR experiences have the potential to impact the day-to-day consumer decisions that people make, which, taken collectively, could radically change the world.
LISTEN TO THE VOICES OF VR PODCAST
I know that this is possible because I had one of the most powerful reactions I’ve ever had from watching Condition One’s Fierce Compassion / Operation Aspen VR experience. This live-action, cinéma vérité VR experience shows animal rights activists breaking into a factory farm to perform an open rescue and document the horrendous living conditions of chickens in cages. It’s a guided tour of the many untreated health ailments and barbaric conditions that are common in these types of industrial-scale factory farms. Having a direct embodied experience and bearing witness to this suffering had such a powerful impact on me that I vowed to never purchase anything other than cage-free chicken eggs.
Condition One has also been producing guided meditations that are designed to be watched after experiencing some of their other animal rights pieces. Factory Farm is the most graphic and intense experience I’ve ever had in VR, in that it shows the slaughter of two pigs as they move through a factory farm in Mexico. After witnessing this horrific scene in VR, I can see why Paul McCartney once said, “If slaughterhouses had glass walls, everyone would be vegetarian.”
Condition One has also been tackling larger issues like global warming in VR. They produced the Melting Ice companion VR piece to An Inconvenient Sequel, the follow-up film to Al Gore’s An Inconvenient Truth. The film lays out all of the latest science as told through the personal narrative of Al Gore, while the VR experience doesn’t attempt to delve into the science in that much depth. Dennis pulled back a lot of the narrative and story elements and just focused on creating an embodied experience of being transported to locations of melting ice: large chunks falling off the sides of cliffs, the cracking sounds and steady dripping, and entire rivers of glacial meltwater cutting through sheets of ice.
One of the challenges with complex topics like global warming is that it’s very difficult to provide a singular embodied experience in VR that tells the entire story of the systemic causes of global warming. Standing on melting ice that’s disappearing at an accelerated pace due to global warming is as good an experience as any, but it’s still difficult to tell that entire story within the confines of VR. So rather than convey the science of it all, Dennis decided to take a more contemplative and Zen approach, creating a sparse experience with limited narration in order to cultivate a direct experience with the sounds and visuals of a rapidly changing part of the planet.
Dennis believes that VR has the potential to be a tool that can inspire humans to cultivate compassion by taking actions that relieve suffering. He’s interested in creating VR experiences that allow us to witness the suffering in the world, and that ultimately help us expand our sphere of compassion beyond just our immediate friends, family, and pets to eventually include all sentient beings and the planet Earth. These embodied virtual reality experiences stick with us in a deeper way, and become a part of our memories as we decide whether to keep participating in a system of violence or to choose more sustainable and ethical options that cultivate compassion and take into consideration the impact on the next seven generations.
Virtualitics is a new company started by NASA JPL and Caltech alumni, offering a new platform that leverages the immersive properties of virtual reality to better aid human comprehension and use of ‘Big Data’.
You’ve likely heard the term ‘Big Data’. The phrase was coined perhaps as early as the mid-nineties, but with information now collected on an ever-increasing number of aspects of our daily lives and beyond, you’ll likely have heard it mentioned a lot more in recent years. The term is, in a way, unhelpfully generic, but it has become a useful colloquialism for the vast, virtual data-scapes constructed from information which touches every part of our daily lives.
But what to do with all of this information? That’s a good question of course, and there are many around the world tasked with mining different data sources in order to extract previously obfuscated insights into complex problems and behaviours. But a new tool from Virtualitics – a new company formed by NASA JPL and Caltech alumni – aims to answer a different question: how do we visualise these vast, multi-dimensional data sets in a way which allows us mere humans to better extract those insights?
“Big Data is worthless if we cannot extract actionable knowledge from it,” said Michael Amori, CEO of Virtualitics. “Visualization can reveal the knowledge hidden in data, but traditional 2-D and 3-D data visualizations are inadequate for large and complex data sets. Our solution is to visualize as many as 10 dimensions in VR/AR all via a Shared Virtual Office, which allows even untrained users to spot patterns in data that can give companies a competitive edge.”
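To make the ‘10 dimensions’ idea a little more concrete (this is a generic illustration of multi-dimensional visual encoding, not Virtualitics’ actual implementation, and all of the field names below are hypothetical), a single record can be mapped onto several visual channels of one 3D glyph at once:

```python
from dataclasses import dataclass

@dataclass
class GlyphMapping:
    """Maps columns of a dataset onto visual channels of a 3D glyph.

    Three spatial axes plus color, size, shape, opacity, and an animation
    channel already encode eight dimensions of a record simultaneously.
    """
    x: str            # spatial position
    y: str
    z: str
    color: str        # hue
    size: str         # glyph radius
    shape: str        # categorical glyph type
    opacity: str
    pulse_rate: str   # animation over time

mapping = GlyphMapping(
    x="revenue", y="growth", z="churn",
    color="region", size="headcount", shape="segment",
    opacity="confidence", pulse_rate="volatility",
)
print(mapping)
```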
Virtualitics’ new virtual reality data visualisation platform claims a pedigree of over a decade of research from NASA (Jet Propulsion Laboratory) and Caltech (California Institute of Technology). It aims to utilise the naturalistic way in which people can interact with virtual objects in VR and to assist comprehension and perception by displaying that data in new and interesting ways.
According to professor George Djorgovski, renowned astrophysicist and founder of Virtualitics: “VR is intrinsically well-suited for human perception, intuition and pattern recognition, leading to insights that may be difficult or even impossible to gain through any traditional visualization technique. It is a natural environment for collaborative visual data exploration and data analytics that enables teams of users, who may be continents apart, to interact with the data and with each other in a shared virtual space. It may also be a natural environment where humans can interact with their artificial intelligence assistants.”
Virtualitics also provides a business oriented social space, which allows colleagues from around the world to meet inside the same virtual space and speculate and collaborate on these large, multi-dimensional data-sets in real time.
The new company has also just announced that it has closed a $3M seed round from angel investors in order to develop its technology further.
It’s certainly an interesting project, and another example of the potential of VR outside of the entertainment sphere. However, there may be a risk that such a project is viewed as a case of The Emperor’s New Clothes by those now used to their efficient two-dimensional, spreadsheet-driven world. And in truth, the benefits these new forms of visualisation may offer won’t fully be understood until systems like this have been put to use in the real world for many years. However, as someone who has constantly noted the stark and astonishingly different perspectives VR can offer, I’m inclined to think that there may well be something in Virtualitics.
Achieving a wide field of view in an AR headset is a challenge in itself, but so too is fixing the so-called vergence-accommodation conflict which presently plagues most VR and AR headsets, making them less comfortable and less in sync with the way our vision works in the real world. Researchers have set out to try to tackle both issues using varifocal membrane mirrors.
Researchers from UNC, MPI Informatik, NVIDIA, and MMCI have demonstrated a novel see-through near-eye display aimed at augmented reality which uses membrane mirrors to achieve varifocal optics which also manage to maintain a wide 100 degree field of view.
Vergence-Accommodation Conflict
In the real world, to focus on a near object, the lens of your eye bends to focus the light from that object onto your retina, giving you a sharp view of the object. For an object that’s further away, the light is traveling at different angles into your eye and the lens again must bend to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.
Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate sharply inward to converge the image. You can see this too with our little finger trick as above; this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double-images of objects far behind your finger. When you then look at those objects behind your finger, now you see a double finger image.
With precise enough instruments, you could use either vergence or accommodation to know exactly how far away an object is that a person is looking at (remember this, it’ll be important later). But the thing is, both accommodation and vergence happen together, automatically. And they don’t just happen at the same time; there’s a direct correlation between vergence and accommodation, such that for any given measurement of vergence, there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, any time you look at anything.
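As a back-of-the-envelope illustration of that correlation (not taken from the researchers’ paper, and assuming a typical interpupillary distance of about 63 mm), both the vergence angle and the accommodation demand can be written as simple functions of fixation distance:

```python
import math

IPD = 0.063  # assumed interpupillary distance in meters (~63 mm average)

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight when fixating at distance_m."""
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

def accommodation_diopters(distance_m):
    """Accommodation demand in diopters (1 / distance in meters)."""
    return 1.0 / distance_m

for d in [0.25, 0.5, 1.0, 2.0, 6.0]:
    print(f"{d:>4} m -> vergence {vergence_angle_deg(d):5.2f} deg, "
          f"accommodation {accommodation_diopters(d):4.2f} D")
```

Because both quantities depend only on the fixation distance, knowing one tells you the other, which is exactly the pairing your visual system has learned to expect.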
But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.
In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which makes up the virtual image, and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).
That comes into conflict with vergence in such headsets, which—because we can show a different image to each eye—is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy. But the most realistic (and arguably, most comfortable) display we could create would eliminate the vergence-accommodation conflict and let the two work in sync, just like we’re used to in the real world.
Eliminating the Conflict
To make that happen, there needs to be a way to adjust the focal power of the lens in the headset. With traditional glass or plastic optics, the focal power is static and determined by the curvature of the lens. But if you could adjust the curvature of a lens on-demand, you could change the focal power whenever you wanted. That’s where membrane mirrors and eye-tracking come in.
The mirrors are able to set the accommodation depth of virtual objects anywhere from 20cm to (optical) infinity. The response time of the lenses between that minimum and maximum focal power is 300ms, according to the paper, with transitions across smaller changes in focal power happening faster.
But how to know how far to set the accommodation depth so that it’s perfectly in sync with the convergence depth? Thanks to integrated eye-tracking technology, the apparatus is able to rapidly measure the convergence of the user’s eyes, the angle of which can easily be used to determine the depth of anything the user is looking at. With that data in hand, setting the accommodation depth to match is as easy as adjusting the focal power of the lens.
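A simplified version of that calculation (only a sketch with idealized gaze angles, not the researchers’ actual pipeline, and again assuming a typical interpupillary distance) triangulates the fixation depth from the two eyes’ gaze directions and converts it into the focal power to request from the varifocal optics:

```python
import math

IPD = 0.063  # assumed interpupillary distance in meters

def fixation_depth_m(left_yaw_deg, right_yaw_deg):
    """Estimate fixation distance from each eye's horizontal gaze angle.

    Angles are measured from straight ahead; inward rotation is positive.
    """
    vergence = math.radians(left_yaw_deg + right_yaw_deg)
    if vergence <= 0:
        return float("inf")  # eyes parallel or diverging: treat as far focus
    return (IPD / 2) / math.tan(vergence / 2)

def focal_power_diopters(depth_m):
    """Accommodation target to set on the varifocal element (1/distance)."""
    return 0.0 if math.isinf(depth_m) else 1.0 / depth_m

depth = fixation_depth_m(3.0, 3.0)  # ~0.6 m fixation
print(f"depth {depth:.2f} m -> set lens to {focal_power_diopters(depth):.2f} D")
```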
Those of you following along closely will probably see a potential limitation to this approach—the accommodation depth can only be set for one virtual object at a time. The researchers thought about this too, and proposed a solution to be tested at a later date:
Our display is capable of displaying only a single depth at a time, which leads to incorrect views for virtual content [spanning] different depths. A simple solution to this would be to apply a defocus kernel approximating the eye’s point spread function to the virtual image according to the depth of the virtual objects. Due to the potential of rendered blur not being equivalent to optical blur, we have not implemented this solution. Future work must evaluate the effectiveness of using rendered blur in place of optical blur.
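For readers wondering what such a rendered-blur approximation might look like in practice, here is a minimal sketch (my own illustration, not the authors’ implementation) that scales a Gaussian blur by the dioptric difference between an object’s depth and the plane the optics are currently focused at; the blur-per-diopter constant is an arbitrary tuning value:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus_blur(image, object_depth_m, focus_depth_m, blur_per_diopter=2.0):
    """Approximate out-of-focus rendering for a single-focal-plane display.

    Blur radius grows with the dioptric distance between the object and
    the current focal plane. blur_per_diopter is an assumed constant,
    not a value from the paper.
    """
    diopter_error = abs(1.0 / object_depth_m - 1.0 / focus_depth_m)
    sigma = diopter_error * blur_per_diopter  # pixels of blur per diopter
    return gaussian_filter(image, sigma=(sigma, sigma, 0))

# Example: a coffee cup at 0.5 m rendered while the user fixates at 3 m.
frame = np.random.rand(128, 128, 3)
blurred = defocus_blur(frame, object_depth_m=0.5, focus_depth_m=3.0)
```

Whether such synthetic blur is perceptually equivalent to true optical blur is exactly the open question the authors flag.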
Other limitations of the system (and possible solutions) are detailed in section 6 of the paper, including varifocal response time, form-factor, latency, consistency of focal profiles, and more.
Retaining a Wide Field of View & High Resolution
But this isn’t the first time someone has demonstrated a varifocal display system. The researchers identified several other varifocal display approaches, including free-form optics, light field displays, pinlight displays, pinhole displays, multi-focal plane display, and more. But, according to the paper’s authors, all of these approaches make significant tradeoffs in other important areas like field of view and resolution.
And that’s what makes this novel membrane mirror approach so interesting—it not only tackles the vergence-accommodation conflict, but does so in a way that allows a wide 100 degree field of view and retains a relatively high resolution, according to the authors. You’ll notice in the chart above that, of the different varifocal approaches the researchers identified, any large-FOV approach results in a low angular resolution (and vice-versa), except for their solution.
– – — – –
This technology is obviously at a very preliminary stage, but its use as a solution for several key challenges facing AR and VR headset designs has been effectively demonstrated. And with that, I’ll leave the parting thoughts to the paper’s authors (D. Dunn, C. Tippets, K. Torell, P. Kellnhofer, K. Akşit, P. Didyk, K. Myszkowski, D. Luebke, and H. Fuchs):
Despite few limitations of our system, we believe that providing correct focus cues as well as wide field of view are most crucial features of head-mounted displays that try to provide seamless integration of the virtual and the real world. Our screen not only provides basis for new, improved designs, but it can be directly used in perceptual experiments that aim at determining requirements for future systems. We, therefore, argue that our work will significantly facilitate the development of augmented reality technology and contribute to our understanding of how it influences user experience.
Hassan Karaouni is one of the 11 winners of an Oculus Launch Pad scholarship for his project My: home, which allows people to share 360 videos of locations that are meaningful to them. In my Voices of VR episode about Google Earth VR, I talked about how the principle of embodied cognition explains how our memories are tied to geographic locations. But right now Google Earth’s resolution at the human scale is really uncanny, and you can’t go inside.
That’s where Hassan’s project tries to fill the gaps by enabling people to share 360 videos of places that are meaningful to them, while being able to navigate between them using a model of the Earth. This is quite an intimate and effective way to get to know someone, but it’s also the type of content that’s going to be a lot more meaningful to its creators 10 to 20 years from now because it is so effective at evoking memories.
LISTEN TO THE VOICES OF VR PODCAST
Hassan Karaouni is also one of the co-founders of the Rabbit Hole VR student group at Stanford. They’ve held a number of events, and have deep philosophical discussions about how VR can impact human life and the human condition. So Hassan and I go down the rabbit hole in this episode by exploring the deeper philosophical implications of simulation theory and our relationship to fate and free will. In the wrap-up, I talk about how a recent sci-fi film helped me gain some more insights into the differences between chronos and kairos time, and how a VR experience is a non-linear portal that tips the balance towards creating more kairos-type experiences.
Researchers at Stanford think that having a third arm in VR could make you a more efficient (virtual) human. So they’ve set out to learn what they can about the most effective means of controlling an extra limb in VR.
Thanks to high quality VR motion controllers, computer users are beginning to reach into the digital world in an entirely new and tangible way. But this is virtual reality after all, and we can do whatever we want, so why be restricted to a mere two arms? Researchers at Stanford’s Virtual Human Interaction Lab have finally said “enough is enough,” and have begun studying which control schemes are most effective for use with a virtual third arm.
Having only ever lived with two arms, a virtual third arm would need to be easy to learn to control to be of any use. In a paper published in the journal Presence: Teleoperators and Virtual Environments, Bireswar Laha, Jeremy N. Bailenson, Andrea Stevenson Won, and Jakki O. Bailey defined three methods of controlling a third arm that extends outward from the virtual user’s chest.
The first method controls the arm via the user’s head: turning and tilting the head causes the arm to move in a relatively intuitive way. The second method, which the researchers call ‘Bimanual’, uses the horizontal rotation of one controller combined with the vertical rotation of a second controller to act as inputs for the arm. And the third method, called ‘Unimanual’, uses the horizontal and vertical rotation of just a single controller to drive the third arm.
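As an illustration of how such a mapping might work in principle (a hypothetical sketch, not the researchers’ code; the arm length and chest position are assumed values), the head-control scheme can be approximated by converting head yaw and pitch into an endpoint for the chest-mounted arm:

```python
import math

ARM_LENGTH = 1.0  # meters, assumed reach of the virtual third arm

def head_controlled_arm(yaw_deg, pitch_deg, chest_position=(0.0, 1.3, 0.0)):
    """Map head yaw/pitch onto the tip position of a chest-mounted third arm.

    Yaw rotates the arm left/right, pitch raises or lowers it; values are
    degrees from looking straight ahead. A simplified model of the
    'head-control' scheme described in the paper.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    cx, cy, cz = chest_position
    x = cx + ARM_LENGTH * math.cos(pitch) * math.sin(yaw)
    y = cy + ARM_LENGTH * math.sin(pitch)
    z = cz - ARM_LENGTH * math.cos(pitch) * math.cos(yaw)  # -z is forward
    return (x, y, z)

print(head_controlled_arm(20.0, 10.0))  # look slightly up and to the right
```

The bimanual and unimanual schemes could be modeled the same way, simply swapping head angles for controller rotation angles.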
The paper, called Evaluating Control Schemes for the Third Arm of an Avatar, details an experiment the researchers designed to test the efficacy of each control scheme in virtual reality. The task set forth is for the user to tap a randomly changing white block among a grid of blocks, with one grid for the left arm, another for the right arm, and a third set that’s further away and only reachable by the third arm. The paper’s abstract reads:
Recent research on immersive virtual environments has shown that users can not only inhabit and identify with novel avatars with novel body extensions, but also learn to control novel appendages in ways beneficial to the task at hand. But how different control schemas might affect task performance and body ownership with novel avatar appendages has yet to be explored. In this article, we discuss the design of control schemas based on the theory and practice of 3D interactions applied to novel avatar bodies. Using a within-subjects design, we compare the effects of controlling a third arm with three different control schemas (bimanual, unimanual, and head-control) on task performance, simulator sickness, presence, and user preference. Both the unimanual and the head-control were significantly faster, elicited significantly higher body ownership, and were preferred over the bimanual control schema. Participants felt that the bimanual control was significantly more difficult than the unimanual control, and elicited less appendage agency than the head-control. There were no differences in reported simulator sickness. We discuss the implications of these results for interface design.
Ultimately, the idea of a third arm in VR is something of a metaphor. When you break it down, the study is really about VR input schemes which use traditionally non-input motions as input. As abstract as that is, a third arm is a more immediately understandable input concept, because we already have arms and know how they work and what they’re good at. But this research is easily applied to other input modalities, like the commonly seen laser-pointer and gaze-based interfaces that are already employed in the VR space.
On today’s episode, I talk with Dr. Benjamin Lok from the University of Florida about how they’re using virtual humans as patients to train medical students. He talks about the key components for creating a plausible training scenario, which include not only accurate medical symptom information but also, more importantly, a robust personality and specific worldview. Humans hardly ever just transmit factual data, and so whether the patient says too much or not enough, the students have to be able to navigate a wide range of personalities in order to get the information required to help diagnose and treat the patient.
LISTEN TO THE VOICES OF VR PODCAST
Virtual humans can embody symptoms that a human actor can’t display, assist in going through an extended interactive question-and-answer path, or serve in collaborative training scenarios where it becomes difficult to get all of the required expert collaborators into the same location at the same time.
Dr. Lok makes the point that creating virtual humans requires a vast amount of knowledge about the human condition and that it’s really a huge cross-disciplinary effort, but also one of the most important fields of study, since it has so much to teach us about what it means to be human.
Stephanie Hurlburt is a low-level graphics engineer who has previously worked on the Unity Game Engine, Oculus Medium, and Intel’s Project Alloy, and now she’s creating a texture compression product called Basis at her company Binomial. I had a chance to catch up with her at PAX West, and we take a bit of a deep dive into the graphics pipeline and some of her VR optimization tools and processes. We also talk about how to determine whether an experience is CPU-bound or GPU-bound, an open source game engine being built by Intel, the future of real-time ray tracing in games like Tomorrow Children & Dreams, and why she sees texture compression as a bottleneck in the graphics pipeline worth pursuing for the future of wireless streaming in VR.
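A crude way to frame that CPU-versus-GPU question (just an illustrative heuristic, not Stephanie’s actual workflow; the timing inputs are hypothetical values you’d pull from your engine’s profiler or GPU timer queries) is to compare per-frame CPU and GPU timings against the headset’s frame budget:

```python
def classify_bottleneck(cpu_frame_ms, gpu_frame_ms, target_ms=11.1):
    """Rough heuristic for deciding whether a VR frame is CPU- or GPU-bound.

    cpu_frame_ms / gpu_frame_ms are assumed to come from profiling tools;
    11.1 ms is roughly the per-frame budget at a 90 Hz refresh rate.
    """
    if max(cpu_frame_ms, gpu_frame_ms) <= target_ms:
        return "within budget"
    return "CPU-bound" if cpu_frame_ms > gpu_frame_ms else "GPU-bound"

print(classify_bottleneck(cpu_frame_ms=13.4, gpu_frame_ms=9.8))  # CPU-bound
```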
LISTEN TO THE VOICES OF VR PODCAST
Here’s a recent talk that Stephanie gave on texture compression and the future of VR:
Last week Oculus Chief Scientist Michael Abrash stood on the stage at Oculus Connect 3 and talked about where he thinks VR will be in five years’ time. He made some bold predictions that are going to take a lot of work and resources to achieve.
That’s why Oculus Research is launching a $250,000 grant initiative to advance work in a few key areas. This isn’t like the company’s large investments in content. The money will be split between a maximum of three research proposals in vision and cognitive science. Research will be carried out over the next one to two years, and submissions should come from academic institutions.
Oculus is looking to make progress in very specific fields with this money, and the findings from successful applicants will be released to the public. The company has outlined what it’s hoping to find in a Call for Research.
The first area the company is looking at is ‘Self-motion in VR’. That doesn’t mean new locomotion techniques, but instead the ways that information sources like a wider field of view affect users’ behavior in “three-dimensional scenes”. “More specifically,” the call notes, “we are interested in how these cues to depth may change the way the visual system uses other sources of shape information…to recover the three-dimensional layout of the virtual or augmented scene”.
Oculus is also looking for a team to develop a way to generate a ‘dataset of binocular eye movements’ within the real world. You might remember Abrash speaking about the complexity of delivering perfect eye-tracking in his talk last week, and this might be related. “While eye movements generated in laboratory settings are well studied,” the call reads, “much less data is available about eye movements in the natural world or in virtual reality.”
‘Multisensory studies’ is next. Oculus wants to understand why VR and AR experiences that cover multiple senses are so much more compelling than those that address a single one. “We would like to determine what features and characteristics make multisensory information so valuable in AR/VR,” the call notes.
Finally, Oculus is interested in “biological motion related to social signaling”. Again, this relates to another part of Abrash’s talk, this time concerning virtual humans. The company wants to establish the gestures, facial expressions, eye movements and other factors that are essential to communicating our intended messages beyond mere words. With a clear understanding of this, we could see more life-like avatars.
Submissions need to be emailed to callforresearch@oculus.com and will be a maximum of five pages in length, outlining methods, budget, and estimated timelines. Reviews for proposals will begin on October 25th and successful applicants will be contacted on November 1st.