Digital Frontier: Where Brain-computer Interfaces & AR/VR Could One Day Meet

Whenever I used to think about brain-computer interfaces (BCI), I typically imagined a world where the Internet was served up directly to my mind through cyborg-style neural implants—or basically how it’s portrayed in Ghost in the Shell. In that world, you can read, write, and speak to others without needing to lift a finger or open your mouth. It sounds fantastical, but the more I learn about BCI, the more I’ve come to realize that this wish list of functions is really only the tip of the iceberg. And when AR and VR converge with the consumer-ready BCI of the future, the world will be much stranger than fiction.

Be it Elon Musk’s latest company Neuralink—which is creating “minimally invasive” neural implants to suit a wide range of potential future applications—or Facebook directly funding research on decoding speech from the human brain, BCI seems to be taking an important step forward in its maturity. And while regulatory hoops governing implants and their relative safety mean these well-funded companies can only push the technology forward as a medical device today, eventually the technology will get to a point where it’s both safe and cheap enough to land in the brainpans of neurotypical consumers.

Although there’s really no telling when you or I will be able to pop into an office for an outpatient implant procedure (much like how corrective laser eye surgery is done today), we know at least that this particular future will undoubtedly come alongside significant advances in augmented and virtual reality. But before we consider where that future might lead us, let’s take a look at where things are today.

Noninvasive Baby Steps

BCI and AR/VR have already converged, albeit on a pretty small scale and to little appreciable effect so far for the wider AR/VR userbase. Early startups like Neurable are already staking their claim, basing their work on the portable and noninvasive method of electroencephalography (EEG), which reads voltage fluctuations in the brain from outside the skull.

Image courtesy Neurable

In terms of brain-computer interfaces, EEG is the oldest and one of the lowest ‘resolution’ methods of tuning into the brain’s constant flow of ‘action potentials’, the neuron-to-neuron pulses that form the foundation of thought, perception, action, and, well… everything.
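To make ‘low resolution’ concrete, here’s a minimal sketch, on simulated data rather than any vendor’s actual pipeline, of one of the most common EEG-derived features: power within a frequency band such as alpha (8-12 Hz).

```python
import numpy as np
from scipy.signal import welch

# Simulated single-channel EEG: a 10 Hz alpha rhythm buried in noise.
# Real systems use many electrodes plus filtering and artifact rejection.
fs = 256                          # sample rate (Hz)
t = np.arange(0, 2, 1 / fs)       # two seconds of signal
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

# Welch's method estimates the power spectral density of the signal.
freqs, psd = welch(eeg, fs=fs, nperseg=fs)

def band_power(lo_hz, hi_hz):
    """Total power in a frequency band, e.g. alpha = 8-12 Hz."""
    mask = (freqs >= lo_hz) & (freqs <= hi_hz)
    return psd[mask].sum() * (freqs[1] - freqs[0])

print("alpha (8-12 Hz) power:", band_power(8, 12))
print("beta (13-30 Hz) power:", band_power(13, 30))
```

Features this coarse, a handful of numbers summarizing billions of neurons, are a big part of why EEG gets described as low resolution.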

According to Valve’s resident experimental psychologist Mike Ambinder, who gave a talk on the state of BCIs and game design at GDC 2019 earlier this year, using EEG is tantamount to sitting outside of a football stadium and trying to figure out what’s happening on the field just by listening to the intensity of the crowd’s reaction; EEG can only reliably measure neural activity that occurs in the uppermost layers of the brain.

SEE ALSO
Using Abstract VR Art for Neural Entrainment & Brain Research

Although EEG can provide a good starting point for some early data collection, Ambinder maintains, a trip underneath the skull is needed in order to derive deeper, more useful knowledge, which in turn should allow for more immersive and adaptive games in the future.

There are some other noninvasive methods for viewing the brain, such as magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI); however, these haven’t made their way out of hospitals and research facilities (and likely won’t for some time) due to their massive size, power requirements, and price tags—precisely the problems implants aim to tackle.

Implants Galore

Neuralink is betting that its first-generation device, the N1 sensor, will provide the sort of real-world benefits its immediate target audience is looking for: a home-operated, low-latency, high-bandwidth method of basic input to computers and smartphones, enabling things like text entry and simple operations—an ideal solution for those without the use of their limbs.

The company’s furthest-reaching vision, shared at its unveiling last month, teases a future mesh-like device called a ‘neural lace’ that could gain greater access to the brain by being injected into its capillaries.

This, the company hopes, could one day give users the ability to bypass our senses to virtually simulate vision, touch, taste, and effectively our entire perception. As the company’s founder and largest investor, Elon Musk, puts it, Neuralink’s mission is to eventually “achieve a symbiosis with artificial intelligence” through ‘full-brain’ BCI.

There’s of course no time frame for Neuralink’s prophetic AI-merging neural lace; Musk himself says that while the N1 sensor should begin in-human clinical studies in 2020, achieving full BCI will be a long, iterative process.

Adam Marblestone, a research scientist at Google DeepMind who holds a PhD in biophysics from Harvard, isn’t so starry-eyed about the initial launch of Neuralink’s N1 tech though. Putting the company’s advances into perspective, Marblestone said in a recent tweet that although Neuralink is accelerating the current state of the technology, it’s not a magic bullet.

“They are climbing Everest with bigger team/better gear (engineering). What is really needed is a helicopter (science-intensive breakthrough),” Marblestone writes.

BCI might seem like newfangled tech, but research has been in the works for longer than you might expect. In 1997, a bioengineering professor at the University of Utah named Dr. Richard Normann developed the ‘Utah Array’, an implant with 100 electrodes designed to rigidly attach to the brain. In fact, the Utah Array is still in production in various forms by Blackrock Microsystems, and has been instrumental in gathering neural recordings of action potentials over the past 20 years.

In contrast, Neuralink promises to deliver “as many as 3,072 electrodes per array distributed across 96 threads,” according to the company’s white paper, not to mention the added benefit of less-invasive, flexible threads designed to cause less inflammation than rigid electrodes. A detachable receiver would also power the array of smaller sensors, and transmit data to computers via Bluetooth.

Image courtesy Neuralink

There’s more than one way to skin a cat though, and the same goes for establishing what you might call ‘read & write’ access to neurons—the ability both to measure what’s happening in your brain and to stimulate it too.

Besides the N1 sensor, an implant called ‘Neural Dust’ could also offer a window into the mind. The millimeter-scale implants are passive, wireless, and battery-less, and promise to provide what a UC Berkeley research group calls in a recent paper “high-fidelity transmission” of data obtained from muscles or neurons depending on where they’re implanted.

Neural dust, Image courtesy University of California, Berkeley

Notably, a co-author on that particular paper is Dongjin Seo, Neuralink’s director of implant systems, so it’s possible we’ll see some further work in that area under Neuralink.

Another interesting approach, described in a recently published research paper by a group of scientists from Stanford University, Boulder Nonlinear Systems, and the University of Tokyo, is optogenetic stimulation. Essentially, it’s a technique of firing light into the visual cortex of a brain that has been altered to include light-reactive proteins. The researchers were able to write neural activity into dozens of single neurons in a mouse and simultaneously read the impact of this stimulation across hundreds of nearby neurons. The end goal was to see whether they could inject specific images into the mouse’s visual field that weren’t really there. It’s rudimentary, but it works.

Admittedly, it’s not a case of using an implant per se, but it’s early steps like these that will unlock the behavioral patterns of neurons and allow scientists to precisely manipulate the brain rather than just observe it. Once things get further along, BCI may well be the perfect complement to immersive AR and VR.
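As a crude way to picture that read-and-write loop, here’s a toy simulation, and only that: a random network stands in for neurons, ‘writing’ means driving a few dozen units, and ‘reading’ means observing how the rest respond. It isn’t modeled on any of the studies above.

```python
import numpy as np

# A deliberately toy 'read & write' loop, not a model of any real study:
# 'write' a stimulus into a few dozen units of a random network, then
# 'read' how activity spreads to the rest of the population.
rng = np.random.default_rng(0)
n_neurons = 300
coupling = rng.normal(scale=0.1, size=(n_neurons, n_neurons))

def read_write_step(targets, strength=1.0):
    drive = np.zeros(n_neurons)
    drive[targets] = strength              # write: stimulate chosen neurons
    return np.tanh(coupling @ drive)       # read: downstream response

targets = rng.choice(n_neurons, size=30, replace=False)
response = read_write_step(targets)
print(f"stimulated {targets.size} neurons; "
      f"{(np.abs(response) > 0.05).sum()} showed a measurable response")
```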

Read & Write: The Immersive Future

It’s 2030. All-in-one AR/VR glasses are a reality. They’re thin and light enough to be worn outdoors for when you need to navigate the city, or battle with your friends at the park in the wildest game of capture the flag you could ever imagine. When you want a more all-encompassing experience, you can simply switch the headset to VR mode and you’re instantly in a massively multiplayer virtual world.

According to Facebook Reality Labs’ chief scientist Michael Abrash, a device like this is on the horizon, and he expects it to come sometime in the next decade.

SEE ALSO
Oculus Chief Scientist Dives Deep Into the Near Future of AR & VR

It’s all great, certainly better than it used to be when the headsets first came out in 2016. Now AR/VR headsets have near-perfect eye-tracking and on-board AI that does object recognition and convincingly speaks to you like a personal assistant. The headset has an eye-tracking-based UI that sometimes feels like magic. You’re still mostly using hand gestures to do things in AR though, and you still rely on controllers in VR for the best at-home experience. Going back to a headset from a decade earlier is pretty unthinkable by now.

Concept image from Oculus Connect 5 showing what a waveguide-based VR headset may look like in the near future, Image courtesy Facebook

Outside of location-based AR/VR parks like The VOID, most consumers still don’t own haptic suits because they’re expensive, and because even a suit that can simulate hot and cold through thermoelectric coolers embedded in the fabric still only provides a few well-placed thumps and buzzes from the same sort of haptic tech you find in smartphones today—not really practical to wear, and not additive enough to buy for at-home play.

At the same time, two generations of smaller, higher-performing neuronal implants have made their way into production. Implants, once requiring major surgery, are now an outpatient procedure thanks to AI-assisted robotic surgery. The teams behind them, backed by the big names in tech, are working to bring BCI to the consumer market, but none so far have been officially approved by the FDA for neurotypical users. The latest model, which is still technically for medical use, has gotten really good though, and offers a few benefits clearly targeted at enticing would-be users to shop around for doctors willing to fudge a diagnosis. Some countries have more lax rules, and the most adventurous with a few thousand to burn are the first to sign up.

With the implant, you can not only ‘type’ ten times faster and search the Web at the speed of thought, but you can listen to music without headphones, remotely voice chat with others without physically speaking, and navigate the UI with only your thoughts. Soon enough, special interest lobbies do their thing, Big Business does its thing, and somehow the first elective consumer BCI implant becomes legal, allowing for a host of others to slide in behind it.

This opens up a whole new world of game design, and menus basically become a thing of the past, as games become reactive not only to your abilities as a player, but to your unseen, unspoken reactions to the challenges laid out before you (e.g. anger, delight, surprise, boredom). Game designers now have a wealth of information to sift through, and have to completely rethink the sort of systems they build in order to leverage this new data. It’s a paradigm shift that reminds long-time AR/VR developers of ‘the good old days’, back when games didn’t need to rely on always-connected AI services for passable NPC interactions.
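As a thought experiment, a reactive system like that might reduce to something as simple as the hypothetical difficulty director sketched below; the affect signals and thresholds are invented for illustration, not drawn from any shipping system.

```python
from dataclasses import dataclass

# Hypothetical sketch of an affect-driven 'difficulty director'. A real
# system would consume estimates from a BCI decoder rather than hand-set
# values, and would track far more states than these.

@dataclass
class AffectEstimate:
    frustration: float  # all values in [0, 1]
    boredom: float
    delight: float

def adjust_difficulty(level: float, affect: AffectEstimate) -> float:
    """Nudge difficulty toward the player's engagement sweet spot."""
    if affect.frustration > 0.7:
        level -= 0.1    # player is struggling: ease off
    elif affect.boredom > 0.7:
        level += 0.1    # player is coasting: ramp up
    return min(max(level, 0.0), 1.0)

level = 0.5
level = adjust_difficulty(level, AffectEstimate(frustration=0.8, boredom=0.1, delight=0.2))
print(f"new difficulty: {level:.1f}")   # -> 0.4
```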

SEE ALSO
Valve Psychologist: Brain-computer Interfaces Are Coming & Could Be Built into VR Headsets

Now imagine a further future. The glasses are sitting on your trophy shelf of historical VR and AR headsets, gathering dust. Your neural implant is no longer a series of chips pockmarking your skull; you had those painlessly removed right before your most recent procedure. A supple lattice coats the surface of your brain and delivers strategic stimulus to a carefully planned network of neurons. It has access to a large portion of your brain. The glasses aren’t needed any more because digital imagery is injected directly into your visual cortex. You can feel wet grass on your feet, and smell pine needles from an unexplored forest in front of you.

All of this is plausible given what we know today. As our understanding of the brain becomes more complete, and the benefits of having a neural implant begin to outweigh the apparent risks, brain-computer interfaces are poised to converge and then merge with AR/VR at some point. The timescale may be uncertain at this early date, and the risks not fully understood before many jump into the next phase of the digital revolution, but we can leave that can of worms for another article.


XTAL Ultra High-End VR Headset Adds Neurable’s Emotion Analysis


A new partnership between Neurable and VRgineers adds the former’s “brain sensors” to the ultra high-end XTAL VR headset.

We tried out Neurable last year, a system which places EEG (electroencephalography) sensors along the interior of a VR headset’s strap to gather data from contact with the scalp. Combining that information in real time with eye-tracking could allow the system to identify, measure, and analyze the emotion and intent of the person wearing the headset. The XTAL VR headset from VRgineers includes eye tracking, so adding the EEG sensors and using Neurable’s analysis software might offer customers with very large budgets more capable analysis and training tools than consumer-grade systems like the Rift and Vive.

“We anticipate that this will be an enterprise-grade device, built for professional designers and engineers who require superior visual quality and highly accurate, reliable analytics,” Neurable CEO Ramses Alcaide explained in an email. “We’ve seen a lot of traction in three main areas: high-consequence simulation training for industrial applications, design feedback in AEC [Architecture, Engineering and Construction] use cases, customer research for retail.”


VRgineers claims that “Neurable’s unique ability to overcome the signal-to-noise issues of traditional non-invasive” brain-computer interfaces “enables them to deliver on the promise of truly useful BCI technology for enterprise and consumer applications.”

The expected use cases for the system make sense for the XTAL headset, which starts around $5,500 for ultra-high-end features including a higher-resolution panel, an expanded field of view, and integrated Leap Motion hand tracking. There’s no word yet on when the headset with Neurable integration will be available, or how much it will cost.

The military is investing nearly half a billion dollars in Microsoft-built HoloLens AR headsets to help soldiers become more effective, while Walmart purchased 17,000 Oculus Go VR headsets this year to train the workforce at every store. If businesses are able to realize savings (or increased profits) by implementing VR training, then the high up-front cost of a headset like XTAL is likely still worth the investment. While we tried XTAL earlier this year and Neurable last year, coming away impressed by aspects of both demos, we haven’t tried a demo with both of these technologies implemented together.

“VR is a medium that relishes in data. Making sense of all of that data both from an input/output perspective is very important,” Alcaide explained. “Eye-tracking allows systems to parse a user’s virtual reality experience (i.e. when and where they are looking) while BCI provides data on the internal experience of the user (e.g. change in cognitive state). With both data streams, we can extract powerful behavioral insights from virtual reality not available otherwise. It’s not enough to just see where a user is looking. We need to know what kind of changes are going on while they do so. Similarly, it’s not enough to just know general changes in state. Being able to programmatically associate the two data streams is how we bring value to these new types of applications.”
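A minimal sketch of what ‘programmatically associating’ those two streams could look like is a timestamp join; the data here is invented, standing in for both the eye tracker and the EEG decoder, and is not Neurable’s actual pipeline.

```python
import numpy as np

# Illustrative only: align gaze fixations from an eye tracker with a scalar
# cognitive-state estimate from a hypothetical EEG decoder, joined on time.

fixations = [(0.8, "door"), (2.1, "lever"), (3.4, "robot")]  # (seconds, object)

state_t = np.arange(0, 5, 0.1)               # decoder output sampled at 10 Hz
engagement = 0.5 + 0.3 * np.sin(state_t)     # stand-in for real decoder output

for t_fix, obj in fixations:
    i = np.argmin(np.abs(state_t - t_fix))   # nearest state sample in time
    print(f"looked at {obj!r} at t={t_fix}s, engagement ~ {engagement[i]:.2f}")
```

The output pairs every “what they looked at” with a “what state they were in”, which is the kind of combined insight Alcaide describes.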



University of Michigan Commercialisation Fund Invests in Neurable

In 2015 a tech start-up called Neurable began its work using EEG to let users control virtual reality (VR) with only the power of the brain. Now the Zell Lurie Commercialisation Fund (ZLCF) has announced that it has participated in a Series A funding round for Neurable.

Neurable began as a project by two University of Michigan alumni, Ramses Alcaide and Michael Thompson, and uses a cutting-edge brain-computer interface to allow hands-free control of VR.

The Neurable technology uses machine learning and proprietary algorithms to measure brain activity in real time and translate it into control of the virtual characters and environment. Recognising that the technology could have a wide range of applications for various areas and industries, the company plans to use the new funding injection to fuel continued growth and development in the VR market.

“We are very excited to support Neurable in this next stage of growth,” said Scott Collins, VP of portfolio management at the ZLCF. “In the due diligence process, our team was impressed by Neurable’s breakthrough BCI technology, and we believe that the company is well-positioned to lead in the emerging neurotechnology market.”

The ZLCF is a student-led fund, one of five such funds at the Samuel Zell & Robert H. Lurie Institute for Entrepreneurial Studies. The purpose of the ZLCF is to give students experience in venture capital by investing real money into real businesses both inside and outside the university.

“This is an exciting investment for ZLCF,” said Michael Johnson, faculty director of ZLCF and entrepreneur-in-residence at the Zell Lurie Institute. “The Neurable team and technology came out of the outstanding innovation ecosystem at the University of Michigan. The ZLCF students are looking forward to working with the team as they build a great company.”

Neurable’s Vive integration

For future coverage on Neurable and other VR start-up firms, keep checking back with VRFocus.

Thought Controlled VR is on the way With Neurable’s Brain-Computer Interfaces

There’s a massive amount of work going on in the field of virtual reality (VR) input methods, whether it’s Valve and its Knuckles controllers, omni-directional treadmills, data gloves by Manus VR and CaptoGlove, or gesture control by Leap Motion. All of this could be made somewhat redundant, however, with the advent of brain-computer interfaces from companies like Neurable, which recently took its latest model to SIGGRAPH 2017 where it debuted Awakening, a VR videogame preview made in partnership with eStudiofuture – the company behind Fusion Wars.

Neurable has now unveiled its brain-computer interface (BCI), which replaces the HTC Vive’s normal head strap. The seven electrodes on the device read specific signals in a user’s EEG known as event-related potentials (ERPs), Spectrum reports. Neurable’s BCI tech is platform agnostic as well: the electrodes are off-the-shelf, and the real technology is the machine learning in Neurable’s BCI software. That means a company could build a similar version for the Oculus Rift, for example.
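Neurable’s actual algorithms are proprietary, but a generic ERP pipeline gives the flavor of the approach: slice the continuous EEG into epochs time-locked to stimulus events, then train a classifier to separate responses to attended ‘target’ stimuli from the rest. A self-contained sketch on simulated data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Generic ERP-classification sketch on fake data, not Neurable's software:
# cut continuous EEG into stimulus-locked epochs, then learn to separate
# 'target' from 'non-target' responses.

fs, n_channels = 256, 7                      # 7 electrodes, as on the strap
rng = np.random.default_rng(1)
eeg = rng.normal(size=(n_channels, fs * 60)) # 60 s of fake continuous EEG

# Each event: (sample index of stimulus onset, label: 1 = target, 0 = not).
events = [(int(rng.integers(0, fs * 59)), int(rng.integers(0, 2)))
          for _ in range(100)]

def epoch(onset, length=int(0.8 * fs)):
    """An 800 ms post-stimulus window, flattened into a feature vector."""
    return eeg[:, onset:onset + length].ravel()

X = np.array([epoch(onset) for onset, _ in events])
y = np.array([label for _, label in events])

clf = LinearDiscriminantAnalysis().fit(X[:80], y[:80])
# On random data this hovers around chance; on real EEG it should be well above.
print("held-out accuracy:", clf.score(X[80:], y[80:]))
```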

Neurable HTC Vive app

To promote the BCI, Neurable partnered with eStudiofuture to create Awakening. “Awakening is a futuristic story reminiscent of Stranger Things: you are a child held prisoner in a government science laboratory,” wrote Neurable VP Michael Thompson on a post on Medium. “You discover that experiments have endowed you with telekinetic powers. You must use those powers to escape your cell, defeat the robotic prison guards, and free yourself from the lab. The game allows you to manipulate objects and battle foes with your mind, and is played entirely without handheld controllers.”

Think this technology is far off? Well, it looks like it’ll be getting a commercial release next year. “We’re targeting VR arcades in 2018,” Neurable CEO Ramses Alcaide told Spectrum. “What we’re showing off right now is a shortened version of the arcade game. We’re not really a game company or a hardware company… But this game is the first thing we’re looking to provide to VR arcades that are using our technology.”

As Neurable continues development, VRFocus will bring you further updates.

SIGGRAPH 2017: Neurable Lets You Control A Virtual World With Your Mind


I’ve used my eyes to interact with a virtual world before, but startup Neurable just enhanced that experience by reading my thoughts too.

At SIGGRAPH this week the Boston-based startup is showing its modified HTC Vive, which includes EEG (electroencephalography) sensors along the interior of the head strap. This is combined with eye-tracking technology from German firm SMI, which may have just been acquired by Apple. The EEG sensors’ comb-like structure dug through my hair to subtly make contact with my scalp, where they detected brain activity. It is definitely alarming to hear someone outside VR say my brain is looking good.

What followed was a brief training session where a group of objects floated in front of me — a train, ball, and block among them. Each time one of them rotated I was told to focus on that object and think “grab” in my mind. I did so a number of times for several of the objects, every attempt successful.

Afterward there was a test. I was told to just think of the object I wanted. I tried not to stare directly at the object I wanted but five out of five times the correct object was picked as I thought about it. A sixth time the wrong object was selected but it occurred as someone was talking to me and I was distracted. As I refocused, almost immediately the correct object moved toward me.
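What presumably makes this reliable is repetition: each object flashes many times, and averaging the responses evoked by each object’s flashes suppresses the noise. A conceptual sketch with simulated decoder scores, not real EEG:

```python
import numpy as np

# Conceptual flash-based selection: every object flashes repeatedly, and the
# object whose flashes evoke the strongest average response wins. Averaging
# over repeated flashes is what beats the single-trial noise.

rng = np.random.default_rng(2)
objects = ["train", "ball", "block"]
target = "ball"   # the object the user is silently thinking 'grab' about

def flash_scores(obj, n_flashes=10):
    """Simulated per-flash decoder scores; the attended object scores higher."""
    base = 0.8 if obj == target else 0.2
    return base + 0.3 * rng.standard_normal(n_flashes)

selected = max(objects, key=lambda obj: flash_scores(obj).mean())
print("selected:", selected)   # almost always 'ball'
```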

In the video above you can see each of the objects flash. Neurable CEO and President Ramses Alcaide says they are able to detect these flashes in my brain even though they register subconsciously. He said the eye tracking inside the headset wasn’t active during the training and test portion of the demonstration. It became active during the next portion of the demo meant to show the potential of the system in a game environment. Here’s how Neurable describes it:

Neurable is debuting Awakening, a VR game preview made in partnership with eStudiofuture, at SIGGRAPH 2017 in Los Angeles. Awakening is a futuristic story reminiscent of Stranger Things: you are a child held prisoner in a government science laboratory. You discover that experiments have endowed you with telekinetic powers. You must use those powers to escape your cell, defeat the robotic prison guards, and free yourself from the lab. The game allows you to manipulate objects and battle foes with your mind, and is played entirely without handheld controllers.

According to Neurable, this works using machine learning to interpret “your brain activity in real time to afford virtual powers of telekinesis.” The company offers an SDK so Unity developers can integrate the system into a game.

I was able to select a group of objects on the ground of my holding cell just by thinking about them and then use them to try and escape. I was offered some hints from outside VR to escape the room but the selection with my mind worked to grab the objects I wanted. As I moved into a lab, I looked around at the counter tops and thought about the objects to toss at a robot approaching me. One of them was a keyboard. As I thought the word “grab” it floated toward me. Object after object I tossed at the incoming robots until I progressed through to the end of the level.

“We have two modes. Pure EEG mode, which just determines the object you want and brings it to you directly, and we have a mode that is a hybrid BCI [brain-computer interface] mode, and in that mode we can use the eyes as a type of mouse where you can move your eyes near…the object you want to select,” said Alcaide. “From there your brain tells us which one you clicked on.”
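In code, that hybrid mode might look something like the sketch below; the object layout, scores, and threshold are all invented for illustration and have nothing to do with Neurable’s implementation.

```python
# Hypothetical hybrid selection: eyes as mouse, brain as click.

def hybrid_select(gaze, objects, erp_scores, radius=0.15, threshold=0.5):
    """Gaze narrows the candidates; the ERP score supplies the 'click'."""
    near = [o for o in objects                      # step 1: near-gaze objects
            if abs(o["x"] - gaze[0]) < radius and abs(o["y"] - gaze[1]) < radius]
    if not near:
        return None
    best = max(near, key=lambda o: erp_scores[o["name"]])
    return best["name"] if erp_scores[best["name"]] > threshold else None

objects = [{"name": "keyboard", "x": 0.40, "y": 0.50},
           {"name": "mug",      "x": 0.45, "y": 0.55},
           {"name": "lamp",     "x": 0.90, "y": 0.10}]
scores = {"keyboard": 0.9, "mug": 0.3, "lamp": 0.8}
print(hybrid_select((0.42, 0.52), objects, scores))   # -> 'keyboard'
```

Note how the lamp never gets considered despite its high score: the gaze gate is what stops stray brain activity from clicking things the user isn’t looking at.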

In the video above you can see me sort of covering my face as a kind of surprised reaction each time the system correctly identified which object I wanted. I was frankly in shock — I really didn’t expect it to work as well as it did. Both this brain-computer interface and the earlier eye tracking demo I tried felt like true super powers.

According to Alcaide, Neurable raised around $2 million and has 13 employees.

“I think the future of mixed reality interactions is an ecosystem of solutions that incorporates voice, gesture control, eye tracking and the missing link to that entire puzzle which is brain-computer interfaces…we need some sort of system that prevents the action from happening until the user wants it to happen, and that’s where brain-computer interfaces come in,” said Alcaide. “In my opinion mixed reality cannot become a ubiquitous computing platform like the iPhone, or like the computer, until we have brain-computer interfaces as part of the solution.”


SIGGRAPH 2017’s VR Village Hosts Diverse Range of VR and AR Tech

SIGGRAPH 2017 is due to be held in Los Angeles this summer, and its VR Village is planning a great range of experiences for attendees. VR Village is one of the newer programs within the SIGGRAPH conference, and is hoping to wow audiences with unique applications of both virtual reality (VR) and augmented reality (AR).

Promising to showcase unique applications for both VR and AR in fields such as health, education, entertainment, design, and gaming, VR Village exhibitors are hoping to impress new business partners as well as the public.

Diversity is the focus of 2017’s VR Village. Previous events looked at art and simulations, while 2017 looks towards the diversity of both creators from around the world and their projects, as Denise Quensel, 2017 VR Village Chair, explains: “We made a conscious effort for diversity — we tried to normalize our content to be as diverse as possible. We believe that diversity in content, and diversity of contributors, helps facilitate perspectives and opportunities that are of great benefit to attendees.

“The experiences that will be seen this summer are not only outstanding examples of VR and AR, but can only be experienced in SIGGRAPH’s unique VR Village space.”

Interesting exhibits include Neurable: Brain-Computer Interfaces for Virtual and Augmented Reality, a promising leap into mind-controlled virtual reality, Out of Exile, a room-scale VR experience telling a story of LGBTQ discrimination, and HOLO-DOODLE, a “VR hangout” experience featuring naughty robots making its world premiere at SIGGRAPH 2017.

SIGGRAPH 2017 is taking place from July 30th to August 3rd 2017 in Los Angeles, where the event will showcase the latest in computer graphics technology and interactive experiences.

For more on the future of VR and SIGGRAPH 2017, keep up to date with VRFocus.

How to Control VR With the Power of Your Mind

Ramses Alcaide first developed the technology to use brain power to control videogames when he was a graduate student at the University of Michigan; now he is using that technology with start-up company Neurable.

MIT Technology Review have revealed that technology start-up Neurable have developed their brain-sensing technology to the point where it can determine a player’s intended actions in virtual reality (VR). It works by affixing dry electrodes that record brain activity using EEG, something most people would be more familiar with in a hospital setting. The EEG signals are then interpreted by software to determine the correct action within the game.

The wireless electrodes are paired with an HTC Vive headset in the current version, which the company says is still in its early stages of development. Neurable are hoping to offer software toolkits for developers later this year, and are optimistic that headsets with integrated electrodes will be developed not long after.

“You don’t really have to do anything,” says Alcaide. “It’s a subconscious response, which is really cool.”

It does take a few minutes of training to learn how to use the technology to achieve the desired response, though once that training is complete, it is applicable to every application. Once ‘trained’, it is possible to use pure brain power to do things like fire off spells in Skyrim.

The race to replace the familiar hand-held controllers with mental power goes all the way back to the Atari Mindlink – though that device actually read muscle movements, not brainwaves. There have also been expensive executive toys like the Force Trainer and similar products that use EEG hardware produced by NeuroSky. The disadvantages previously were the response lag and the accuracy of the response.

Alcaide says that the technology has greatly improved, and that one build of it achieved 85% accuracy when processed in real time, and 99% accuracy with a one-second delay.
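That trade-off between accuracy and delay is exactly what you would expect from averaging: waiting longer lets the system pool more noisy evidence before committing to a decision. A toy simulation with made-up noise levels shows the shape of it:

```python
import numpy as np

# Toy illustration of the accuracy-vs-delay trade-off: averaging more noisy
# decoder outputs before committing raises accuracy at the cost of latency.
# The noise level here is invented, not measured from Neurable's system.

rng = np.random.default_rng(3)

def accuracy(n_samples, trials=20_000):
    """P(correct) when deciding from the mean of n noisy observations."""
    # True signal = +1; the decision is correct if the averaged evidence > 0.
    evidence = 1.0 + 1.5 * rng.standard_normal((trials, n_samples))
    return (evidence.mean(axis=1) > 0).mean()

for n in (1, 4, 16):   # more samples means a longer wait before deciding
    print(f"{n:2d} samples -> accuracy ~ {accuracy(n):.1%}")
```

With these made-up numbers the jump from one sample to sixteen takes accuracy from roughly 75% to over 99%, the same basic curve Alcaide’s real-time-versus-delayed figures trace.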

VRFocus will keep you informed on Neurable and other VR tech startups.