Meta Offered a Glimpse into the XR R&D That’s Costing It Billions

During the Connect 2021 conference last week, Meta Reality Labs’ Chief Scientist, Michael Abrash, offered a high-level overview of some of the R&D that’s behind the company’s multi-billion dollar push into XR and the metaverse.

Michael Abrash leads the team at Meta Reality Labs Research which has been tasked with researching technologies that the company believes could be foundational to XR and the metaverse decades in the future. At Connect 2021, Abrash shared some of the group’s very latest work.

Full-body Codec Avatars

Meta’s Codec Avatar project aims to achieve a system capable of capturing and representing photorealistic avatars for use in XR. A major challenge beyond simply ‘scanning’ a person’s body is getting it to then move in realistic ways—not to mention making the whole system capable of running in real-time so that the avatar can be used in an interactive context.

The company has shown off its Codec Avatar work on various occasions, demonstrating improvements each time. The project began with high-quality heads alone, but has since evolved to full-body avatars.

The video above is a demo representing the group’s latest work on full-body Codec Avatars, which researcher Yaser Sheikh explains now supports more complex eye movement, facial expressions, and hand and body gestures which involve self-contact. It isn’t stated outright, but the video also shows a viewer watching the presentation in virtual reality, implying that this is all happening in real-time.

With the possibility of such realistic avatars in the future, Abrash acknowledged that it’s important to think about security of one’s identity. To that end he says the company is “thinking about how we can secure your avatar, whether by tying it to an authenticated account, or by verifying identity in some other way.”

Photorealistic Hair and Skin Rendering

While Meta’s Codec Avatars are already looking pretty darn convincing, the research group believes the ultimate destination for the technology is to achieve photorealism.

Above Abrash showed off what he says is the research group’s latest work in photorealistic hair and skin rendering, and lighting thereof. It wasn’t claimed that this was happening in real-time (and we doubt it is), but it’s a look at the bar the team is aiming for down the road with the Codec Avatar tech.

Clothing Simulation

Along with a high quality representation of your body, Meta expects clothing will continue to be an important way that people express themselves in the metaverse. To that end, the company thinks that making clothes behave realistically will be an important part of that experience. Above the company shows off its work in clothing simulation and hands-on interaction.

High-fidelity Real-time Virtual Spaces

While XR can easily whisk us away to other realities, teleporting friends virtually to your actual living space would be great too. Taken to the extreme, that means having a full-blown recreation of your actual home and everything in it, which can run in real-time.

Well… Meta did just that. They built a mock apartment complete with a perfect replica of all the objects in it. Doing so makes it possible for a user to move around the real space and interact with it like normal while keeping the virtual version in sync.

So if you happen to have virtual guests over, they could actually see you moving around your real world space and interacting with anything inside of it in an incredibly natural way. Similarly, when using AR glasses, having a map of the space with this level of fidelity could make AR experiences and interactions much more compelling.

Presently this seems to serve the purpose of building out a ‘best case’ scenario of a mapped real-world environment for the company to experiment with. If Meta finds that having this kind of perfectly synchronized real and virtual space becomes important to valuable use-cases with the technology, it may then explore ways to make it easy for users to capture their own spaces with similar precision.



Facebook Researchers Reveal Methods for Design & Fabrication of Compact Holographic Lenses

Researchers from Facebook Reality Labs have shared new methods for the design & fabrication of compact holographic lenses for use in XR headsets.

The lenses used in most of today’s XR devices are conventional refractive lenses, which can be fairly bulky, especially when optimized for specific optical characteristics. Fresnel (ridged) lenses are frequently used in XR headsets because they preserve optical power while cutting down on thickness and weight.

In theory, holographic lenses are a promising approach for XR optics thanks to their ability to perform the same (or even more advanced) functions of a traditional lens, but in the space of a wafer-thin film. However, designing and fabricating holographic lenses with high optical performance is far more difficult today than it is with typical refractive optics.
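For a sense of what such a film has to encode, the sketch below (ours, not the paper’s) computes the idealized phase profile a holographic lens would need to record in order to focus collimated light at a chosen focal length; the wavelength, focal length, and sampling values are arbitrary placeholders.

```python
import numpy as np

# Ideal phase profile a thin holographic lens must encode to focus
# collimated light at focal length f (values below are illustrative only).
wavelength = 532e-9      # green laser line, meters
focal_length = 0.05      # 50 mm
aperture = 0.02          # 20 mm square film
n = 1024                 # samples per axis

x = np.linspace(-aperture / 2, aperture / 2, n)
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2

# Phase delay (radians) of a perfect lens, wrapped to one 2*pi cycle,
# which is what gets recorded as a volume grating in the film.
phase = -(2 * np.pi / wavelength) * (np.sqrt(r2 + focal_length**2) - focal_length)
phase_wrapped = np.mod(phase, 2 * np.pi)

print(f"max unwrapped phase delay: {abs(phase).max():.1f} rad")
```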

In an effort to move us one step closer to the practical use of holographic lenses in XR devices, Facebook Reality Labs researchers Changwon Jang, Olivier Mercier, Kiseung Bang, Gang Li, Yang Zhao, and Douglas Lanman have detailed new methods for creating them. This could go a long way toward making it possible to build, at scale, the kind of compact XR glasses Facebook recently demonstrated.

In a paper published in the peer-reviewed journal ACM Transactions on Graphics (Vol. 39, No. 6, Article 184) in December, titled Design and Fabrication of Freeform Holographic Optical Elements, the researchers write, “we present a pipeline for the design and fabrication of freeform [Holographic Optical Elements (HOEs)] that can prescribe volume gratings with complex phase profiles and high selectivity. Our approach reduces image aberrations, optimizes the diffraction efficiency at a desired wavelength and angle, and compensates for the shrinkage of the material during HOE fabrication, all of which are highly beneficial for VR/AR applications. We also demonstrate the first full-color caustic HOE as an example of a complex, but smoothly-varying, volume grating.”

Specifically the paper covers optimization methods for establishing a theoretical holographic lens design, and two approaches for actually manufacturing it.

One method uses a pair of freeform refractive optics to create the target hologram; the paper describes how to design those optics so that they accurately form the hologram within the holographic film. The other uses a holographic printer to build up the target hologram from tiled holographic patches; here the paper describes how to optimize the printing process so that it most accurately recreates the target hologram in the film, a challenge the researchers say is completely different from the first method.

While the paper didn’t explore quite this far, the authors say that future research could attempt to apply these same methods to curved, rather than flat, surfaces.

“For some VR/AR applications, it could be beneficial to create HOEs with physically curved form-factors, for example, for HOEs laminated on curved windshields or glasses. We expect our fabrication framework to expand well to such cases, since neither the printer or the [refractive lens] approaches require the HOE to be flat, and the optimization method of Algorithm 1 could be adapted to intersect rays with a curved surface […],” the researchers write. “Optimizing the shape of the HOE as part of our method would provide us with more degrees of freedom and would broaden applications, but we leave this as future work.”


Google ARCore Depth API Now Available, Letting Devs Make AR More Realistic

ARCore, Google’s developer platform for building augmented reality experiences for mobile devices, just got an update that brings the company’s previously announced Depth API to Android and Unity developers. Depth API not only lets mobile devices create depth maps using a single RGB camera, but also aims to make the AR experience more natural, as virtual imagery is more realistically placed in the world.

Update (June 25th, 2020): Google today announced it’s making its Depth API for ARCore available to developers. A few studios have already integrated the Depth API into their apps to create more convincing occlusion, such as Illumix’s Five Nights at Freddy’s AR: Special Delivery game, which lets enemies hide behind real-world objects for more startling jump scares.

ARCore 1.18 for Android and Unity, including AR Foundation, is rolling out to what Google calls “hundreds of millions of compatible Android devices,” although there’s no clear list of which devices are supported just yet.

Original Article (December 9th, 2019): Shahram Izadi, Director of Research and Engineering at Google, says in a blog post that the new Depth API enables occlusion for mobile AR applications, as well as more realistic physics and surface interactions.

To demonstrate, Google created a number of demos to show off the full set of capabilities the new Depth API brings to ARCore. Keep an eye on the virtual objects as they’re accurately occluded by physical barriers.

“The ARCore Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera,” Izadi says. “The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.”

Full-fledged AR headsets typically use multiple depth sensors to create depth maps like this one, which Google says was created on-device with a single sensor. In the visualization, red indicates areas that are closer to the camera, while blue indicates areas that are farther away.

“One important application for depth is occlusion: the ability for digital objects to accurately appear in front of or behind real world objects,” Izadi explains. “Occlusion helps digital objects feel as if they are actually in your space by blending them with the scene. We will begin making occlusion available in Scene Viewer, the developer tool that powers AR in Search, to an initial set of over 200 million ARCore-enabled Android devices today.”
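To make the occlusion idea concrete, here is a minimal sketch (not Google’s implementation) of how a per-pixel depth map enables it: a virtual fragment is simply hidden wherever the estimated real-world depth is closer to the camera than the virtual content. The array names and values are illustrative.

```python
import numpy as np

def composite_with_occlusion(camera_rgb, virtual_rgb, virtual_depth, env_depth):
    """Blend a rendered virtual layer over a camera frame, hiding virtual
    pixels that fall behind the real world according to the depth map.

    camera_rgb    -- (H, W, 3) camera image
    virtual_rgb   -- (H, W, 3) rendered virtual content
    virtual_depth -- (H, W) depth of virtual content, meters
    env_depth     -- (H, W) estimated real-world depth, meters
    """
    # A virtual pixel is visible only where it is closer than the real surface.
    visible = virtual_depth < env_depth
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]
    return out

# Toy example: a virtual object at 2 m, with a real wall at 1.5 m covering the left half.
h, w = 4, 4
cam = np.zeros((h, w, 3)); virt = np.ones((h, w, 3))
vdepth = np.full((h, w), 2.0)
edepth = np.full((h, w), 3.0); edepth[:, : w // 2] = 1.5  # wall occludes left half
print(composite_with_occlusion(cam, virt, vdepth, edepth)[..., 0])
```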

Additionally, Izadi says the Depth API doesn’t require specialized cameras or sensors, and that with the addition of time-of-flight (ToF) sensors to future mobile devices, ARCore’s depth mapping could eventually allow virtual objects to be occluded by moving physical objects.

The new Depth API follows Google’s release of its ‘Environmental HDR’ tool at Google I/O in May, which brought more realistic lighting to AR objects and scenes, aiming to enhance immersion with more realistic reflections, shadows, and lighting.

Update (12:10): A previous version of this article claimed that Google was releasing the Depth API today; in fact, the company is only opening a sign-up form for developers interested in using the tool. You can sign up here.


Researchers Transform Real Soccer Matches into Tabletop AR Reconstructions

The FIFA World Cup finals are nearly here, and while we still live in a time when most everyone interested in the France vs. Croatia match will be glued to a TV set, researchers from the University of Washington, Facebook, and Google just gave us a prescient look at what an AR soccer match could look like in the near future.

The researchers have devised an end-to-end system to create a moving 3D reconstruction of a real soccer match, which they say in their paper can be viewed with a 3D viewer or an AR device such as a HoloLens. They did it by training their convolutional neural network (CNN) with hours of virtual player data captured from EA’s FIFA video games, which essentially gave the team the data needed to ingest a single monocular YouTube video and output it into a sort of 2D/3D hybrid.

Researchers involved in the project are University of Washington’s Konstantinos Rematas, Ira Kemelmacher-Shlizerman (also Facebook), Brian Curless, and Steve Seitz (also Google).

There are a few caveats that should temper your expectations of a ‘perfect’ 3D reconstruction you could watch from any angle: players are still projected as 2D textures, the positioning of individual players is still a bit jittery, and the ball isn’t tracked at all—an indispensable part of the equation the team says is coming in the future. And because the system works from a single monocular shot, occlusion is an issue too: when players are hidden from the camera, their textures disappear from view.

The prospect of watching a (nearly) live soccer match in AR is still pretty astounding though, especially on your living room coffee table.

Image courtesy University of Washington, Facebook, Google

“There are numerous challenges in monocular reconstruction of a soccer game. We must estimate the camera pose relative to the field, detect and track each of the players, re-construct their body shapes and poses, and render the combined reconstruction,” the team writes.
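The full pipeline is beyond a short snippet, but one step is easy to illustrate: once the camera’s pose relative to the field is known (the paper estimates it from the pitch lines), a detected player’s foot point can be back-projected onto the ground plane to anchor that player’s textured billboard in 3D. The sketch below is ours, not the authors’ code, and the homography values are made up for illustration.

```python
import numpy as np

def foot_point_to_field(bbox, H_img_to_field):
    """Map the bottom-center of a player bounding box (pixels) to
    field-plane coordinates (meters) using a planar homography."""
    x0, y0, x1, y1 = bbox
    foot = np.array([(x0 + x1) / 2.0, y1, 1.0])   # bottom-center, homogeneous
    p = H_img_to_field @ foot
    return p[:2] / p[2]                            # dehomogenize -> position on the pitch

# Illustrative homography: in a real system this comes from matching
# detected field lines against the known pitch geometry.
H = np.array([[0.10, 0.00, -40.0],
              [0.00, 0.15, -30.0],
              [0.00, 0.001,  1.0]])

player_box = (610, 320, 650, 430)  # x0, y0, x1, y1 in pixels
print("billboard anchor on pitch (m):", foot_point_to_field(player_box, H))
```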

Viewing live matches won’t be possible for a while either, the team says. To watch a full match on an AR device such as a HoloLens, the system still requires a real-time reconstruction method and a method for efficient data compression and streaming to deliver it to your AR headset.

Because the system relies on standard footage, it represents a sort of low-hanging fruit of what’s possible now with current capture tech. Even though it’s based on 4K video, there are still unwanted artifacts such as chromatic aberration, motion blur, and compression artifacts.

Ideally, a stadium would be outfitted with multiple cameras with the specific purpose of AR capture for the best possible outcome—not the overall goal of the paper, but it’s a definite building block on the way to live 3D sports in AR.

The team’s research will be presented at the Computer Vision and Pattern Recognition conference, which is taking place June 18–22 in Salt Lake City, Utah.


Oculus Research Becomes ‘Facebook Reality Labs’, Creating “further and faster” Leaps in AR/VR

Oculus today announced it’s rebranding Oculus Research, the company’s R&D lab, to the newly created Facebook Reality Labs (FRL). The shift, the company says, better addresses the increasingly important role of research and development in AR/VR while emphasizing collaboration with Facebook’s other skunkworks, something Oculus Chief Scientist Michael Abrash says is allowing for “further and faster” development of leading-edge AR/VR tech.

The lab’s focus on the future hasn’t changed, the company says, although the new name reflects a new role the R&D group plays “not only at Oculus, but also across Facebook’s AR/VR organization, which includes Building 8, Camera, and Social VR,” an Oculus spokesperson told Road to VR.

Facebook’s Building 8 specializes in researching and productizing advances in AR, VR, AI and more.

The company announced the change via a Facebook post by Oculus Chief Scientist Michael Abrash.

Image courtesy Oculus

Abrash famously offered up some bold predictions at Oculus Connect 3 back in 2016, which outlined a pretty specific direction for AR/VR on its five-year march forward, including the prediction that VR headsets would double the number of current pixels per degree to 30, push the resolution to around 4,000 × 4,000 pixels per display, and widen the field of view to 140 degrees. Both Oculus Rift and HTC Vive currently offer 15 pixels per degree, a resolution of 1,080 × 1,200 pixels per display, and a field of view of around 110 degrees.
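As a rough sanity check on those figures (our arithmetic, not Abrash’s), angular resolution can be approximated by dividing horizontal pixel count by horizontal field of view; the predicted 4,000-pixel-wide display spread across 140 degrees works out to roughly the 30 pixels per degree he forecast.

```python
def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    """Crude angular resolution estimate; ignores lens distortion and binocular overlap."""
    return h_pixels / h_fov_deg

# Abrash's predicted spec: ~4,000 pixels across a ~140-degree field of view.
print(round(pixels_per_degree(4000, 140), 1))  # ~28.6, i.e. roughly 30 ppd
```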

Abrash also presciently predicted then that the tech’s fixed depth of focus would likely become variable within five years. Many of these technologies, including varifocal displays and a 140-degree field of view, are incorporated in Oculus’ Half Dome prototype, which was revealed last week at Facebook’s F8 developer conference.

Image courtesy Facebook

“We are just a year and a half along now [after Connect 3], and I would say those predictions are holding up well,” Abrash says. “In fact, the truth is that I probably undershot, thanks to Facebook’s growing investment in FRL, which allows us to push the boundaries of what it takes to build great experiences further and faster. We are helping Oculus and all of Facebook create trailblazing AR and VR experiences, from what’s most affordable to leading edge.”


Abrash says FRL “brings together a world-class R&D team of researchers, developers, and engineers with the shared goal of developing AR and VR across the spectrum,” and that while there are plenty of issues with VR and AR at present, “they’re all solvable, and they are going to get solved.”

With increasing investment, the company will no doubt continue its mission to push forward a number of related fields including optics, displays, audio sensing, computer vision, scene reconstruction, graphics animation, UX, haptics, machine learning, software and hardware engineering, social interactions, material sciences and perceptual psychology—all of it crucial to the upcoming generation of future VR/AR devices.


Researchers Electrically Stimulate Muscles in Haptic System Designed for Hands-free AR Input

Researchers at The Human Computer Interaction Lab at Hasso-Plattner-Institut in Potsdam, Germany, recently published a video showing a novel solution to the problem of wearable haptics for augmented reality. Using a lightweight, mobile electrical muscle stimulation (EMS) device that delivers low-voltage pulses to the arm muscles, the idea is to let AR headset users stay hands-free while still experiencing force feedback when interacting with virtual objects, and extra forces when touching physical objects in their environment.

Using a HoloLens headset, researchers show their proposed solution in action, which is made up of a backpack, a laptop computer running Unity, a battery-powered EMS machine, electrode pads, and visual markers to better track hand gestures. The researchers say their system “adds physical forces while keeping the users’ hands free to interact unencumbered.”

Image courtesy Hasso-Plattner-Institut

Both HoloLens and the upcoming Magic Leap One include a physical controller; HoloLens has a simple ‘clicker’ and ML One has a 6DoF controller. While both systems admittedly incorporate gestural recognition, there’s still no established way for AR headset users to ‘feel’ the world around them.

According to the paper, which is being presented at this year’s ACM CHI Conference in Montréal, the EMS-based system actuates the user’s wrist, biceps, triceps, and shoulder muscles with low-voltage pulses to simulate a sort of ‘virtual pressure’. This perceived pressure can be triggered when you interact with virtual objects such as buttons, and even with physical objects like real-world dials and levers, to create an extra sense of force on the user’s arms.
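As a rough illustration of how such a system might gate stimulation on contact events, the sketch below maps a contact force from the physics simulation to a per-user calibrated intensity and pulses the matching muscle channel. This is a hypothetical outline, not the researchers’ Unity implementation, and the `pulse()` driver method is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class EmsChannel:
    """One electrode pair on a muscle group, with per-user calibration limits."""
    name: str
    min_intensity: float  # just-perceptible level found during calibration
    max_intensity: float  # highest comfortable level found during calibration

    def intensity_for(self, normalized_force: float) -> float:
        """Map a 0..1 contact force from the physics sim onto the calibrated range."""
        f = min(max(normalized_force, 0.0), 1.0)
        return self.min_intensity + f * (self.max_intensity - self.min_intensity)

def on_virtual_contact(channel: EmsChannel, normalized_force: float, ems_driver) -> None:
    """Fire a short pulse on one muscle group when the hand meets a (virtual) surface.

    `ems_driver` stands in for whatever serial/Bluetooth interface the EMS hardware
    exposes; its `pulse()` method is hypothetical.
    """
    ems_driver.pulse(channel.name, channel.intensity_for(normalized_force), duration_ms=50)

class PrintDriver:
    """Stand-in driver that just logs what a real EMS box would be asked to do."""
    def pulse(self, channel: str, intensity: float, duration_ms: int) -> None:
        print(f"pulse {channel}: intensity {intensity:.2f} for {duration_ms} ms")

# Example: a medium-force press on a virtual button routed to the triceps channel.
on_virtual_contact(EmsChannel("triceps", 0.2, 0.7), 0.5, PrintDriver())
```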


There are some trade-offs when using this sort of system though, making it somewhat less practical for long-term use as it’s configured now. Two of the biggest drawbacks: it requires precise electrode placement and per-user calibration before each use, and it can also cause muscle fatigue, which would render it less useful and probably less comfortable.

But maybe a little muscle stimulation can go a long way. The paper discusses using EMS sparingly, playing on the user’s keen sense for plausibility while in a physical (and not virtual) environment.

“In the case of [augmented reality], we observed users remarking how they enjoyed nuanced aspects of the EMS-enabled physics, for instance: “I can feel the couch is harder to move when it is stopped [due to our EMS-based static friction]”. As a recommendation for UX designers working in MR, we suggest aligning the “haptic-physics” with the expected physics as much as possible rather than resorting to exaggerations.”

It’s an interesting step that could prove effective in a multi-pronged approach to adding haptics to AR wearables, the users of which would want to stay hands-free when going about their daily lives. Actuator-based gloves and vests have been a low-hanging fruit so far, and are quickly becoming a standard go-to for VR haptics, but still seem too much of a stretch for daily AR use. Force-feedback exoskeletons, which stop physical movements, are much bulkier and are even more of a stretch currently.

There’s no telling what the prevailing AR wearable will be in the future, but whatever it is, it’s going to have to be both light and useful—two aspects EMS seems to nail fairly well out of the gate.


Facebook Open-sources ‘Detectron’ Computer Vision Algorithm for AR Research

Facebook announced this week that it has open-sourced Detectron, the company’s computer vision platform for object detection, built on a deep learning framework. The company says its motive for opening up the project is to accelerate computer vision research, and that teams within Facebook are using the platform for a variety of applications, including augmented reality.

In my recent article detailing the three biggest challenges facing augmented reality today, I noted that real-time object classification was one of the biggest hurdles:

…it’s a non-trivial problem to get computer-vision to understand ‘cup’ rather than just seeing a shape. This is why for years and years we’ve seen AR demos where people attach fiducial markers to objects in order to facilitate more nuanced tracking and interactions.

Why is it so hard? The first challenge here is classification. Cups come in thousands of shapes, sizes, colors, and textures. Some cups have special properties and are made for special purposes (like beakers), which means they are used for entirely different things in very different places and contexts.

Think about the challenge of writing an algorithm which could help a computer understand all of these concepts, just to be able to know a cup when it sees it. Think about the challenge of writing code to explain to the computer the difference between a cup and a bowl from sight alone.

I also talked about how ‘deep learning’ techniques—which involve ‘training’ a computer to interpret what it sees, rather than programming detection by hand—are one potential answer to the problem of real-time object classification. Facebook this week has open-sourced their own object detection algorithm in a move which could accelerate development of systems capable of the sort of real-time object classification that could make augmented reality truly useful.


Augmented reality that actually interacts with the world around us without being pre-programmed for specific environments needs to have a cursory understanding of what’s in our immediate vicinity. For example, if you’re wearing AR glasses and want to be able to project the oven temperature above the oven, along with an AR list floating on your refrigerator to show what food you’re almost out of, your glasses need to know what an oven and a refrigerator look like; a tremendously challenging task given the wide range of ovens and refrigerators, and the places in which they reside.

What object classification looks like through the lens of a deep learning algorithm | Image courtesy Hu et al

Facebook’s AI research team, among others, has been working on this problem of object detection by using deep learning to give computers the ability to reach conclusions about what objects are present in a scene. The company’s object detection algorithm, based on the Caffe2 deep learning framework, is called Detectron, and it’s now available for anyone to experiment with, hosted here on GitHub. Facebook hopes that open-sourcing Detectron will enable computer vision researchers around the world to experiment with and continue to improve the state of the art.

“The goal of Detectron is to provide a high-quality, high-performance codebase for object detection research. It is designed to be flexible in order to support rapid implementation and evaluation of novel research,” the project’s GitHub page reads.

The algorithms examine video input and make guesses about which discrete objects make up the scene. Research projects like Detecting and Recognizing Human-Object Interactions (Gkioxari et al) have used Detectron as a foundation for understanding human actions performed with objects in an environment, a step in the right direction toward helping computers understand enough about what we’re doing to offer valuable information on the fly.

Image courtesy Gkioxari et al
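Detectron itself is a Caffe2 codebase, but the inference flow it exposes is the now-familiar ‘image in, boxes and labels out’ loop. As a rough stand-in (using torchvision’s pretrained Faster R-CNN rather than Detectron’s own API, and a hypothetical input image), that loop looks something like this:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# A pretrained COCO detector, used here as a stand-in for a Detectron model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("kitchen.jpg").convert("RGB")  # hypothetical input frame

with torch.no_grad():
    prediction = model([to_tensor(image)])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep confident detections: each is a box, a COCO class index, and a score.
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.7:
        print(int(label), [round(v, 1) for v in box.tolist()], round(float(score), 2))
```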

Detectron is also used internally by Facebook outside of AI research; “teams use this platform to train custom models for a variety of applications including augmented reality and community integrity,” the company wrote in the announcement of Detectron’s open-sourcing.

Exactly which teams would be using Detectron for augmented reality isn’t clear, but one obvious guess is Oculus, whose chief scientist, Michael Abrash, recently spoke at length about how and when augmented reality will transform our lives.


Qualcomm’s Snapdragon 845 to Improve AR/VR Rendering & Reduce Power Consumption “by 30%”

Qualcomm introduced its new Snapdragon 845 mobile processor at the company’s tech summit in Hawaii this week, which Qualcomm says improves mobile AR/VR headset (branded as ‘XR’, or ‘eXtended reality’) performance by up to 30 percent compared to its predecessor, the Snapdragon 835.

According to a company press release, Qualcomm’s Snapdragon 845 system-on-chip (SoC) will house the equally new Adreno 630 mobile GPU that aims to make entertainment, education and social interaction “more immersive and intuitive.”

The company says their new camera processing architecture and Adreno 630 GPU will help Snapdragon 845 deliver “up to 30 percent power reduction for video capture, games and XR applications compared to the previous generation,” and up to 30 percent improved graphics/video rendering.

The Snapdragon 845 also boasts the ability to provide room-scale 6 degrees-of-freedom (6 DoF) tracking with simultaneous localization and mapping (SLAM), and also includes the possibility for both 6 DoF hand-tracking and 6 DoF controller support. The company says it will support VR/AR displays up to 2K per eye at 120Hz.

Image courtesy Qualcomm

The new SoC, which is no doubt destined to find its way into the next generation of flagship smartphones and dedicated standalone AR/VR headsets, will feature what the company calls “Adreno foveation,” which opens the door to a number of power-saving rendering techniques, including tile rendering, eye tracking, and multiview rendering.
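For a feel of why foveation is such an attractive power lever, here is a back-of-the-envelope sketch with illustrative numbers of our own (Qualcomm hasn’t published a breakdown like this): shading work drops quickly once only a small central region is rendered at full resolution.

```python
def foveated_shading_fraction(fovea_fraction: float, peripheral_scale: float) -> float:
    """Fraction of full-resolution shading work remaining after foveation.

    fovea_fraction   -- share of screen area kept at full resolution (0..1)
    peripheral_scale -- linear resolution scale applied to the periphery (0..1)
    """
    periphery = 1.0 - fovea_fraction
    return fovea_fraction + periphery * peripheral_scale ** 2

# Example: a 20%-of-area fovea with the periphery shaded at half linear resolution.
print(f"{foveated_shading_fraction(0.2, 0.5):.0%} of full-resolution shading work")  # 40%
```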

As Qualcomm’s third-generation AI mobile platform, Snapdragon 845 is also said to provide a 3x improvement in overall AI performance over the prior-generation SoC, something that aims to make voice interaction easier. The benefits for both augmented and virtual reality are clear, as text entry in a headset usually requires you to ‘hunt and peck’ with either motion controllers or gaze-based reticles, an input style that is often tedious and imprecise.

The company says the Snapdragon 845 also improves voice-driven AI with improved “always-on keyword detection and ultra-low-power voice processing,” making it so users can interact with their devices using their voice “all day.”

Check out the full specs of Qualcomm’s new Snapdragon 845 here.


Video Gives Us a Peek Into the Future of How AR and VR Will Work Together

We’re all waiting for the day when you can put on a single, tether-free headset and experience all the ‘R’s that VR/AR/MR have to offer—and NormalVR, a “small but focused” group of passionate immersive media creators, just gave us a peek into what’s possible when you mash it all together.

Known recently for their whimsical open source keyboard (and magnificently weird blobby-guy), Normal is a group of remotely-located developers that are using their own technology to make developing from separate locations an easier experience. By their own admission, they aren’t entirely sure what they’re doing, but it’s clear from the video that surfaced yesterday that they’re hitting on some very big ideas and executing them with serious flair.

In a recent tweet, the group shows what at first appears to be just another fun video demonstrating the potential of creating apps with Apple’s recently released ARKit. But this is more than just a dancing hotdog-guy. Zooming out, we see the blobby-guy avatar is in fact controlled by a person using an HTC Vive standing just out of frame, creating a digital sunflower with some sort of art program like Tilt Brush (2016).

Why is this so important? Normal is building this on readily available, cost-effective hardware and it seems to work perfectly.

As a launchpad for universal Windows apps designed to work across HoloLens and the company’s wallet-friendly fleet of VR headsets, Microsoft’s Windows Mixed Reality platform is taking the first baby steps toward making sure there’s a strong base of apps in the shared AR/VR ecosystem. More important to the scope of this article, Microsoft also spearheaded a neat way to turn your DSLR into an AR capture device so you can see what goes on in the digital realm, but this requires you to buy an extra HoloLens, which at $3,000 is a pretty steep price to pay.

By picking up an iPhone and a VR headset like the Oculus Rift, which is now cheaper than ever at $400, developers can start building the future of games and apps for all immersive platforms—hopefully coming sooner rather than later.


Facebook is Researching Brain-Computer Interfaces, “Just the Kind of Interface AR Needs”

Regina Dugan, VP of Engineering at Facebook’s Building 8 skunkworks and former head of DARPA, took the stage today at F8, the company’s annual developer conference, to highlight some of the research into brain-computer interfaces going on at the world’s most far-reaching social network. While it’s still early days, Facebook wants to start solving some of the AR input problem today by using tech that will essentially read your mind.

Six months in the making, Facebook has assembled a team of more than 60 scientists, engineers, and system integrators specializing in machine learning methods for decoding speech and language, in optical neuroimaging systems, and in “the most advanced neural prosthesis in the world,” all in an effort to crack the question: how do people interact with the wider digital world when they can’t speak and don’t have use of their hands?

Facebook’s Regina Dugan | Image courtesy Facebook

At first blush, the question may seem geared entirely toward people without the use of their limbs, like those with locked-in syndrome, a condition that causes full-body paralysis and the inability to produce speech. But in the realm of consumer tech, making even what Dugan calls a simple “brain-mouse for AR” that lets you click a binary ‘yes’ or ‘no’ could have big implications for the field. The goal, she says, is direct brain-to-text typing: “it’s just the kind of fluid computer interface needed for AR.”

While research regarding brain-computer interfaces has mainly been in service of these sorts of debilitating conditions, the overall goal of the project, Dugan says, is to create a brain-computer system capable of letting you type 100 words per minute—reportedly 5 times faster than you can type on a smartphone—with words taken straight from the speech center of your brain. And it’s not just for the disabled, but targeted at everyone.

“We’re talking about decoding those words, the ones you’ve already decided to share by sending them to the speech center of your brain: a silent speech-interface with all the flexibility and speed of voice, but with the privacy of typed text,” Dugan says—something that would be invaluable to an always-on wearable like a light, glasses-like AR headset.

Image courtesy Facebook

Because the basic systems in use today don’t operate in real time and require surgically implanted electrodes—a giant barrier we’ve yet to surmount—Facebook’s new team is researching non-invasive sensors based on optical imaging, which Dugan says would need to sample data hundreds of times per second and be precise to the millimeter. A tall order, but technically feasible, she says.

This could be done by bombarding the brain with quasi-ballistic photons, light particles that Dugan says can give more accurate readings of the brain than contemporary methods. When designing a non-invasive optical imaging-based system, you need light to go through hair, skull, and all the wibbly bits in between and then read the brain for activity. Again, it’s early days, but Facebook has determined optical imaging as the best place to start.

The big picture, Dugan says, is about creating ways for people to connect even across language barriers by reading the semantic meaning of words behind human languages like Mandarin or Spanish.

Check out Facebook’s F8 day-2 keynote here. Regina Dugan’s talk starts at 1:18:00.
