Facebook Reality Labs: Research Insights into the Work on Lifelike Avatars

Facebook Reality Labs has launched a new year-long blog series offering step-by-step insights into the research work of the company's AR and VR development divisions. The first blog post covers the lifelike avatars being built by the team in Pittsburgh. It showcases the Codec Avatar technology, which is intended to enable real-time conversations through lifelike alter egos in virtual realms.

Facebook Reality Labs – Lifelike Avatars for AR and VR

The Codec Avatar technology is meant to overcome boundaries within social VR environments. Lifelike avatars enable real-time communication in virtual space, so that there should no longer be any difference between a face-to-face conversation and a digital meeting. This is made possible by 3D capture technology combined with AI systems. To realize this future technology, the Pittsburgh team at Facebook Reality Labs is working on advancing this promising form of representation in VR.

The people responsible write on their blog:

Codec Avatars are still an active research project today, but they could radically change the way we connect tomorrow through VR headsets and AR glasses. It's not just about cutting-edge graphics or advanced motion tracking. It's about making interaction with other people in virtual reality feel as natural and effortless as if the person were standing right in front of you. The biggest challenge lies in creating authentic interactions in artificial environments.

A sense of social presence must also come into play to ensure realistic communication. To achieve this, the team relies on two tests. In the so-called "ego test", a person must be satisfied with their digital likeness in their own self-perception in order to identify with it. In the second step, the "mother test", the alter ego must be so realistic that even the person's mother would communicate with the avatar as comfortably, authentically and freely as she would in real life. The avatar must therefore be just as convincing to others as to its owner. The model for social communication in VR remains the "Holodeck" known from the sci-fi series Star Trek.

Codec Avatars are intended to bring this vision a giant step closer and reach the stated goal in the long term:

The real potential of augmented reality and virtual reality is that they let us spend time with whomever we want and build meaningful relationships, no matter where people live.

To do this, even the smallest micro-movements and facial expressions that make a person who they are must be captured. The key to a realistic implementation lies in capturing every physical detail and idiosyncrasy of the individual.

To make this possible, the research team currently relies on complex capture studios equipped with numerous cameras, microphones and further hardware for encoding and decoding. While one of the studios is used solely for facial capture (see title image), the second is responsible for body capture.


Image courtesy: Facebook Reality Labs

There, three-dimensional profiles of the respective participants are created, generating enormous amounts of data. Each camera records 1 GB per second in order to capture the test subjects' many subtle movements. The recorded data is intended to help AI systems work more easily and quickly in the future. In the long term, however, wearable VR headsets should also be capable of capturing and generating the realistic avatars.

The technology is far from mature and, according to those responsible, still needs several years before it is ready for the consumer market. Nevertheless, the Facebook team is continuously working on a possible implementation.

Alongside the technical challenges, ethical complications are also up for discussion, such as the technology's potential use for deepfakes. To prevent this kind of abuse, discussions are currently under way about security systems that protect the lifelike avatars through authentication. The digital alter egos would thus be personalized and usable only by their actual owners.

(Sources: Facebook Reality Labs | Oculus)

The post Facebook Reality Labs: Research Insights into the Work on Lifelike Avatars first appeared on VR∙Nerds.

A Vision of the Future: Facebook Reality Labs Working on Lifelike Avatars

Every year during Oculus Connect, Oculus Chief Scientist Michael Abrash holds a session to discuss his thoughts about virtual reality (VR), its future and how the industry might get there – these are in-depth but worth a watch. All of the company’s R&D used to be under Oculus Research until last year, when it was renamed Facebook Reality Labs. Every so often there are little sneak peeks at what the labs are working on, but now they’re going to be even more transparent, thanks to a new blog series.


Abrash made the announcement via the Oculus Blog today, revealing that it’ll be a year-long series of posts delving inside the various Facebook Reality Labs, highlighting a different team and what they’re working on for the future.

“I expect these blog posts to be markers on the journey to the AR/VR future,” notes Abrash. “Over the coming months, you’ll see deep dives into optics and displays, computer vision, audio, graphics, haptic interaction, brain/computer interface, and eye/hand/face/body tracking.”

Hopefully revealing all sorts of interesting developments and breakthroughs the teams are making, the first blog post focuses on lifelike avatars and the connections people make inside a digital world. The project, called Codec Avatars, is run by Yaser Sheikh, the Director of Research at Facebook Reality Labs in Pittsburgh.

Seeking to overcome the challenges of distance between people, the project uses a combination of 3D capture technology and AI systems to build realistic avatars of users quickly and simply, a stepping stone towards digital online interaction that’s as normal as the real world.

The team at FRL Pittsburgh have been working on this challenge for a number of years, with Sheikh joining Facebook in 2015. Their work was showcased during F8 2018 with two realistic digital people animated in real time. Since then: “We’ve completed two capture facilities, one for the face and one for the body,” says Sheikh. “Each one is designed to reconstruct body structure and to measure body motion at an unprecedented level of detail. Reaching these milestones has enabled the team to take captured data and build an automated pipeline to create photorealistic avatars.”

The capture system FRL Pittsburgh has developed gathers 180 gigabytes of data per second, thanks to hundreds of cameras – each camera captures data at a rate of 1 GB per second. A proprietary algorithm then uses the data to create a unique avatar for the individual scanned.
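
For a sense of scale, the two figures quoted above can be combined in a quick back-of-the-envelope check (the camera count below is implied by the article's numbers rather than stated by Facebook):

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
total_rate_gb_per_s = 180  # whole capture system, GB per second
per_camera_gb_per_s = 1    # each individual camera, GB per second

# How many cameras the quoted total would imply:
implied_cameras = total_rate_gb_per_s / per_camera_gb_per_s
print(implied_cameras)  # 180.0

# Data produced by a single minute of capture, in terabytes:
print(total_rate_gb_per_s * 60 / 1000)  # 10.8 TB
```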

The type of technology FRL Pittsburgh is using isn’t going to rapidly become available for everyday consumers to put themselves into VR, but it certainly showcases the steps that need to be taken to eventually get there. Do read the post as it is fairly extensive, and as Abrash details more of the projects Facebook Reality Labs is working on, VRFocus will let you know.

Facebook Shows Off Research Aiming to Deliver Truly Realistic Avatars

Facebook Reality Labs (FRL) chief scientist Michael Abrash believes AR and VR will be the primary way people work, play, and connect in the future. Abrash regularly speaks about when he expects these fundamental milestones in technology to occur, but in a new year-long blog series he wants to drill down to exactly how it’ll happen.

Abrash today revealed in a blog post more about the company’s research surrounding lifelike avatars, something undertaken by the Pittsburgh branch of the company’s skunkworks, Facebook Reality Labs.

Dubbed ‘Codec Avatars’, the Pittsburgh office is using what they call “groundbreaking 3D capture technology and AI systems” to generate lifelike virtual avatars that could provide the basis of a quick and easy personal avatar creator of the future.

The idea, FRL Pittsburgh’s director of research Yaser Sheikh says, is to close physical distances and make creating social connections in virtual reality as “natural and common as those in the real world.”

Image courtesy Facebook

“It’s not just about cutting-edge graphics or advanced motion tracking,” Sheikh says. “It’s about making it as natural and effortless to interact with people in virtual reality as it is with someone right in front of you. The challenge lies in creating authentic interactions in artificial environments.”

It boils down to achieving what the team calls ‘social presence’, and vaulting the uncanny valley to deliver acceptably realistic avatars is something they’ve been working on for years; the team calls the process “passing the ego test and the mother test.”

“You have to love your avatar and your mother has to love your avatar before the two of you feel comfortable interacting like you would in real life. That’s a really high bar,” Sheikh maintains.

A demonstration showing two VR users talking with lifelike avatars gives an interesting look at what the future of VR avatars could be.

 

The company says at this point these sorts of real-time, photorealistic avatars require quite the gear to achieve. The lab’s two capture studios—one for the face, and one for the body—are admittedly both “large and impractical” at this point.

The ultimate goal however is to achieve all of this through lightweight headsets, although FRL Pittsburgh currently uses its own prototype Head Mounted Capture systems (HMCs) equipped with cameras, accelerometers, gyroscopes, magnetometers, infrared lighting, and microphones to capture the full range of human expression.

Image courtesy Facebook

“Codec Avatars need to capture your three-dimensional profile, including all the subtleties of how you move and the unique qualities that make you instantly recognizable to friends and family,” the company says. “And, for billions of people to use Codec Avatars every day, making them has to be easy and without fuss.”

Using a small group of participants, the lab captures 1GB of data per second in an effort to create a database of physical traits. In the future, the hope is consumers will be able to create their own avatars without a capture studio and without much data either.

 

At the moment volumetric captures last around 15 minutes, and require a large number of cameras to create the most photorealistic avatars possible. The lab then plans to use these captures to train AI systems so consumers could then quickly and easily build a Codec Avatar from just a few snaps or videos.

Humans come in plenty of different shapes and sizes though, which will be its own challenge to surmount, FRL research scientist Shoou-I Yu says.

“This has taught me to appreciate how unique everyone is. We’ve captured people with exaggerated hairstyles and someone wearing an electroencephalography cap. We’ve scanned people with earrings, lobe rings, nose rings, and so much more,” says Yu. “We have to capture all of these subtle cues to get it all to work properly. It’s both challenging and empowering because we’re working to let you be you,” Yu continues.


There are still plenty of challenges to address on the way there, Sheikh maintains. One big problem looming on the horizon is ‘deepfakes’, or the act of recreating a person’s appearance or voice to deceive others.

“Deepfakes are an existential threat to our telepresence project because trust is so intrinsically related to communication,” says Sheikh. “If you hear your mother’s voice on a call, you don’t have an iota of doubt that what she said is what you heard. You have this trust despite the fact that her voice is sensed by a noisy microphone, compressed, transmitted over many miles, reconstructed on the far side, and played by an imperfect speaker.”

Sheikh maintains we’re still years away from seeing this level of avatar photorealism, although the lab is currently exploring the idea of securing future avatars through an authenticated account, as well as several security and identity verification options for future devices.


Abrash says we’ll be getting more blog posts surrounding optics and displays, computer vision, audio, graphics, haptic interaction, brain/computer interface, and eye/hand/face/body tracking.

The post Facebook Shows Off Research Aiming to Deliver Truly Realistic Avatars appeared first on Road to VR.

Facebook: New Patent Filed for AI Finger-Tracking Wristband

Facebook Reality Labs recently filed a patent for an innovative new input method that could replace controllers in the future. The researchers are working on a hand- and finger-tracking wristband that picks up electrical signals at the wrist and therefore works entirely without external cameras. Using machine learning, these signals are then processed and converted into the corresponding finger positions.

Facebook Reality Labs – New Patent Filed for AI Finger-Tracking Wristband

A recently filed patent points to a current Facebook Reality Labs project that could massively change the future of input methods. The tracking wristband, which captures motion through impedance measurement, is designed as a self-contained wearable system attached to the user's wrist.


Image courtesy: Facebook Reality Labs

The various built-in sensors pick up the electrical signals emanating from the user's arm or wrist and forward them to a processing unit. There, the resulting hand and finger movements are computed and converted into digital hand poses. This step uses an AI model based on machine learning.

The patent also describes two different versions of the input device. In the first variant, an active signal is sent through the wrist, and the hand's pose is determined from how muscle contractions during movement alter that signal. The second variant does without the active signal and instead measures the impedance of the arm directly.
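
As a rough illustration of the machine-learning conversion described above (entirely hypothetical: the patent does not specify a model, sensor count, or training setup), a minimal pipeline might regress finger joint angles from raw wrist readings, trained against ground-truth poses from an external tracker:

```python
# Hypothetical sketch of the signal-to-pose step; not Facebook's pipeline.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_SENSORS = 16  # assumed number of impedance electrodes on the band
N_JOINTS = 15   # assumed tracked finger joints (3 per finger)

# Training pairs: raw impedance readings vs. ground-truth joint angles
# captured by an external tracking system (placeholder data shown here).
X_train = np.random.rand(1000, N_SENSORS)
y_train = np.random.rand(1000, N_JOINTS)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(X_train, y_train)

# At runtime, each new wrist reading is converted into a hand pose.
live_reading = np.random.rand(1, N_SENSORS)
predicted_pose = model.predict(live_reading)  # 15 joint angles
```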


Image courtesy: Facebook Reality Labs

The new input method shows a great deal of potential. Precise, controller-free input could open up entirely new ways of interacting in VR, which in turn would create an even stronger sense of immersion. Doing away with external camera sensors also brings greater freedom of movement, which should make standalone headsets such as the upcoming Oculus Quest even more attractive. A further advantage is that it would solve the problem of tracking hand movements behind the body, which current systems still struggle with.

Besides Facebook, HTC and other companies such as CTRL-labs are currently working on similar tracking solutions:

(Sources: Facebook Patent | Upload VR | Video: CTRL-labs YouTube)

The post Facebook: New Patent Filed for AI Finger-Tracking Wristband first appeared on VR∙Nerds.

Report: Facebook’s Upcoming AR Glasses Much Less Bulky Than Hololens Or Magic Leap


Business Insider is reporting that a source has tried on a prototype of the AR glasses under development at Facebook. The source claims the prototype “resembled traditional glasses much more closely than the bulky AR headsets offered by Microsoft (the HoloLens) or Magic Leap.”

“They look like really high-end glasses,” the source said, adding “it’s light enough to not feel heavy on your face, and it wasn’t light enough to feel like you could just sit down and break them.”

A New AR Product Team

The report notes that “hundreds” of employees were moved from Facebook Reality Labs (formerly Oculus Research) to a new team focused on delivering AR products. The team is jointly led by Michael Abrash and Andrew ‘Boz’ Bosworth. Abrash will continue to also lead Facebook Reality Labs as Chief Scientist.

Facebook told Business Insider the decision was made to move AR hardware development “out of research, now that we are closer to shipping”.

The publication’s source claimed that the release was pushed from 2020 to 2022; however, Facebook denied this, stating “We have an exciting AR road map that includes multiple products, so your intel on release dates is wrong”.

It’s unclear whether the reference to “multiple” products means a tiered product lineup (like Go and Quest in the VR market) or simply successor products.

Facebook’s Past Comments

Since the acquisition of Oculus in 2014, Mark Zuckerberg has hinted at wanting to launch AR hardware in the far future.

In 2016 the CEO told The Verge that AR would “maybe” be where VR is in “5 or 10 years”, but confirmed that Oculus was researching AR.

Zuckerberg was more specific in a 2017 interview with Recode, stating:

“I think everyone would basically agree that we do not have the science or technology today to build the AR glasses that we want. We may in five years, or seven years, or something like that. But we’re not likely to be able to deliver the experience that we want right now.”

In October of last year Facebook’s Head of AR Ficus Kirkpatrick directly confirmed the company was working on AR glasses in an interview with TechCrunch.

Oculus Connect 5

At Oculus Connect 5 in October of last year, Chief Scientist Michael Abrash gave a detailed presentation on his views of the future prospects of AR and VR. During this talk, Abrash revealed that Facebook’s investment in AR research had “ramped up a great deal” in the past two years.

Abrash stated that since no off-the-shelf display technology was good enough for AR, the company had to develop “a new display system”.

Abrash gave some specific details of what the company was targeting in terms of form factor: no more than 70 grams in weight, and dissipating no more than 500 milliwatts. For comparison, Magic Leap One weighs over 300 grams. Like Magic Leap’s device, Abrash stated, the glasses would require a companion device for processing, “either a smartphone or puck”.


The post Report: Facebook’s Upcoming AR Glasses Much Less Bulky Than Hololens Or Magic Leap appeared first on UploadVR.

Former Microsoft Senior Researcher, Now At Facebook, Recounts Haptics Innovations


Former Microsoft Senior Researcher Dr. Hrvoje Benko gave a talk entitled ‘The Future of AR Interactions’ at the International Symposium on Mixed and Augmented Reality (ISMAR) conference in October. This week the talk was uploaded on the ISMAR YouTube channel.

Dr. Benko had worked at Microsoft since 2005, but moved to Facebook Reality Labs (formerly Oculus Research) in late 2017. He now leads the human-computer interfaces (HCI) division there.

A Great Display Is Not Enough

A core point that Benko stressed multiple times during the talk is that a great AR display in itself is not good enough — a new input paradigm that takes advantage of spatial computing is needed.

Benko used the example of smartphones with large displays that existed before the iPhone but lacked a multitouch input interface. He pointed out how HoloLens and other current AR devices unsuccessfully try to use existing input techniques.

Finger Tracking Is Not Enough

Benko explained that while finger tracking technology is rapidly progressing, humans don’t often interact with empty air — we interact with objects. The only time we tend to use our hands in empty air is when gesticulating during speech.

The lack of haptic feedback with only finger tracking, he claims, is jarring, and is unlikely to be the basis of future interfaces.

Surfaces May Be The Key

Benko pointed out that mixed reality interfaces could leverage the already existing surfaces in the environment to provide real haptic feedback.

Menus could appear on the nearest table or wall, and your fingers could manipulate the virtual UI elements on these surfaces.

This obviously requires a very advanced sensor system with a precise understanding of all the major objects in the room, as well as almost perfect finger tracking.
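
As a concrete illustration of that idea (a minimal sketch with assumed data structures; Benko's actual system is not public), placing a menu on the nearest detected surface reduces to a point-to-plane distance test followed by a projection:

```python
# Minimal sketch: anchor a virtual menu on the nearest real surface so
# that touching the menu also touches something physical.
import numpy as np

def nearest_surface(planes, head_pos):
    """Pick the detected plane closest to the user.

    planes: list of (point_on_plane, unit_normal) tuples, assumed to
    come from the environment-scanning system.
    """
    def distance(plane):
        point, normal = plane
        return abs(np.dot(head_pos - point, normal))  # point-plane distance
    return min(planes, key=distance)

def project_onto_plane(pos, plane):
    """Anchor point: the projection of `pos` onto the chosen plane."""
    point, normal = plane
    return pos - np.dot(pos - point, normal) * normal

# Example: a table top and a wall, with a user standing nearby.
planes = [
    (np.array([0.0, 0.7, 0.0]), np.array([0.0, 1.0, 0.0])),   # table
    (np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, -1.0])),  # wall
]
head = np.array([0.2, 1.6, 0.4])
menu_anchor = project_onto_plane(head, nearest_surface(planes, head))
print(menu_anchor)  # lands on the table top, the closer surface
```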

‘Haptic Retargeting’

Facebook Open-Sources its AI Platform DeepFocus to Improve VR Visual Rendering

Facebook Reality Labs (FRL) has been experimenting with a number of ways to improve the realism of virtual reality (VR), through both hardware and software means. During the Facebook Developers Conference (F8) 2018 in May the company unveiled Half Dome, a prototype headset with a varifocal mechanism. Now the lab has revealed DeepFocus, an AI-powered platform designed to render blur in real time and at various focal distances.

DeepFocus demo

What DeepFocus and Half Dome are both trying to achieve is something our eyes do naturally: a defocus effect. As the gif above demonstrates, when our eyes look at objects at different distances, whatever they’re not focused on is blurred and out of focus. While this may seem simple, replicating the effect in VR isn’t exactly easy, but doing so opens up a whole range of use cases for the technology.

The first is the goal of truly realistic experiences inside VR. “Our end goal is to deliver visual experiences that are indistinguishable from reality,” says Marina Zannoli, a vision scientist at FRL via the Oculus Blog. “Our eyes are like tiny cameras: When they focus on a given object, the parts of the scene that are at a different depth look blurry. Those blurry regions help our visual system make sense of the three-dimensional structure of the world, and help us decide where to focus our eyes next. While varifocal VR headsets can deliver a crisp image anywhere the viewer looks, DeepFocus allows us to render the rest of the scene just the way it looks in the real world: naturally blurry.”
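
The blur Zannoli describes follows standard thin-lens optics rather than anything proprietary; this minimal sketch (the textbook circle-of-confusion formula, not FRL's code) shows how defocus blur grows with the dioptric gap between where the eye is focused and where an object sits, which is the effect DeepFocus learns to render:

```python
# Standard thin-lens defocus model; parameter values are typical
# textbook figures for the human eye, not FRL numbers.
def circle_of_confusion_mm(focus_dist_m, object_dist_m,
                           pupil_diameter_mm=4.0, focal_length_mm=17.0):
    """Approximate blur-circle diameter on the retina, in millimetres.

    Defocus in dioptres is |1/object - 1/focus|; the blur circle is
    roughly pupil diameter * focal length * defocus (small-angle approx).
    """
    defocus_dioptres = abs(1.0 / object_dist_m - 1.0 / focus_dist_m)
    return pupil_diameter_mm * (focal_length_mm / 1000.0) * defocus_dioptres

# Eye focused at 0.5 m: an object at 2 m is noticeably blurred...
print(circle_of_confusion_mm(0.5, 2.0))  # ~0.102 mm
# ...while an object at the focus distance stays perfectly sharp.
print(circle_of_confusion_mm(0.5, 0.5))  # 0.0
```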

Another important aspect of DeepFocus is comfort. The more natural VR looks and feels, the easier it is to use. “This is about all-day immersion,” says Douglas Lanman, FRL’s Director of Display Systems Research. “Whether you’re playing a video game for hours or looking at a boring spreadsheet, eye strain, visual fatigue and just having a beautiful image you’re willing to spend your day with, all of that matters.”

Oculus Half Dome

While FRL is currently using DeepFocus with Half Dome, the software has been designed to be platform agnostic, which is why the DeepFocus team is open-sourcing the work and data set for engineers developing new VR systems, vision scientists, and other researchers studying perception. As further updates from FRL are released, VRFocus will let you know.

Facebook Wins Patent For Human-Eye ‘Retinal’ Resolution VR Headset


Facebook has been awarded a patent for a head mounted display (HMD) which combines a large low-resolution display and a small high-resolution display projected to where the user’s eye is pointed to achieve ‘retinal’ resolution.

‘Retinal’ or ‘retina’ is a term often used to describe angular resolution that at least matches that of the center of the human eye. Facebook is the company behind the Oculus brand of VR headsets and services. Originally purchased as a startup in 2014, Oculus is now a division of Facebook. This patent’s inventors are all listed as residents of Washington state, suggesting the idea comes from Facebook Reality Labs, which has its main office there.

Two Displays Per Eye, Merged

The patent describes a headset which has eye tracking-driven foveated rendering. For those unfamiliar, foveated rendering is a process which renders most of the view into a virtual world at lower resolution except for the exact area directly in front of where the user’s eye is pointed. That area in front of the eye — where humans perceive the greatest detail — is rendered at a higher resolution.

With this patent, instead of the image being sent to one display per eye, as in most headsets, the high resolution area is sent to a second, much smaller display called the ‘inset display’. A steerable mirror and optical combiner then project this display into the lens, at the position the user’s eye is pointed. Low resolution parts of the virtual world — parts not directly in front of the eyeball — go to the main display and are magnified directly by the lens.

The result would be a display that combines these low and high-resolution panels to provide an experience that roughly matches the level of detail that the human eye can resolve. If the eye tracking is good enough, the user would not even notice that the headset has variable resolution.
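
As a software analogy of that merge (illustrative only; in the patent the combination happens optically, via the steerable mirror and combiner, not in a framebuffer), compositing a coarse full-field render with a sharp gaze-centred inset might look like this:

```python
# Toy foveated compositor: low-res background plus high-res inset at
# the gaze point. Resolutions below are assumptions for illustration.
import numpy as np

def composite(background, inset, gaze_px, out_size=(1200, 1200)):
    """Upscale the coarse background, then paste the sharp inset at the gaze."""
    h, w = out_size
    # Nearest-neighbour upscale of the coarse background to full size.
    ys = np.arange(h) * background.shape[0] // h
    xs = np.arange(w) * background.shape[1] // w
    frame = background[np.ix_(ys, xs)].copy()
    # Paste the inset centred on the tracked gaze point (clamped to bounds).
    ih, iw = inset.shape[:2]
    y0 = int(np.clip(gaze_px[1] - ih // 2, 0, h - ih))
    x0 = int(np.clip(gaze_px[0] - iw // 2, 0, w - iw))
    frame[y0:y0 + ih, x0:x0 + iw] = inset
    return frame

low = np.zeros((300, 300), dtype=np.uint8)       # coarse full-field render
high = np.full((226, 226), 255, dtype=np.uint8)  # sharp foveal inset
out = composite(low, high, gaze_px=(600, 500))   # gaze near image centre
```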

Isn’t This Varjo?

This patent may sound familiar if you’ve heard of the Finland-based company Varjo. Varjo’s current prototype also features an inset and background display, but the high resolution area is locked to the center of the display — it does not yet adapt to eye position. But Varjo’s end goal is to build a headset that sounds surprisingly similar to what Facebook describes in this patent, steering the display with mirrors.

Varjo has also been awarded a patent for this technique. Facebook applied for its patent before Varjo’s, but Varjo’s was granted before Facebook’s. It is not clear how much these techniques differ from one another.

The ‘Inset’ Microdisplay

One diagram in the patent’s supporting documents mentions the resolution and potential supplier of the inset display. It is marked as a 1920×1200 microdisplay from eMagin. This is likely the eMagin WUXGA, which eMagin claims is the highest resolution production OLED microdisplay.

OLED microdisplays use a more costly production method compared to regular OLED panels used in VR today, but are physically much smaller and consume less power. The peripheral display’s exact resolution is not listed, but is described as “low compared to other displays”.

Staggering Angular Resolution

Another diagram marks the vertical field of view of the projected inset display: 17 degrees. Given that its vertical resolution is 1200 pixels, that could mean it provides an average vertical angular resolution of roughly 70 pixels per degree (PPD). Oculus Go, the company’s current highest resolution headset, has an angular resolution of roughly 15 PPD.

Achieving this kind of PPD is not yet possible with traditional VR display systems, as there are no regular OLED displays with even close to the resolution that would be required. This resolution requirement would get progressively higher for higher field of view optics, to the point of impracticality. Even the hypothetical 4000×4000-per-eye headset with a 140-degree field of view, which Oculus Chief Scientist Michael Abrash predicted in 2016 would exist by 2021, would have only around 30 PPD. The current highest resolution VR-suitable OLED on the market is Samsung’s 1440×1600 panel, used in the HTC Vive Pro and Samsung Odyssey series.
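
Those angular-resolution figures follow from simple division, using only the numbers quoted in the article:

```python
def pixels_per_degree(pixels, fov_degrees):
    """Angular resolution: pixel count divided by the field of view."""
    return pixels / fov_degrees

# Inset microdisplay: 1200 vertical pixels over a 17-degree vertical FOV.
print(pixels_per_degree(1200, 17))   # ~70.6, the "roughly 70 PPD" above

# Abrash's hypothetical 4000x4000 headset with a 140-degree FOV:
print(pixels_per_degree(4000, 140))  # ~28.6, the "around 30 PPD" above
```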

Promising, But Just A Patent

It’s important to note that companies patent techniques all the time and most never make it to a consumer product. Color microdisplays of this resolution can cost thousands of dollars, and it’s unclear whether mass production would bring them down to an acceptable cost. There may also be other challenges in manufacturing the complex optical combination system this patent describes.


The post Facebook Wins Patent For Human-Eye ‘Retinal’ Resolution VR Headset appeared first on UploadVR.

Facebook Built A Camera System To Capture Mirrors


SIGGRAPH is just around the corner, so that means research groups like Facebook Reality Labs are starting to reveal some of their cutting-edge work. The latest from FRL (formerly Oculus Research) demonstrates a method to capture the appearance of mirrors from the real world.

The new research opens the door to capturing the look of complex real-world environments that feature mirrors and reflective surfaces. Mirrors are the enemy of many computer vision applications, and if Facebook’s research helps solve that problem it might ultimately lead to a range of useful applications.

“Mirror and glass surfaces are essential components of our daily environment yet notoriously hard to scan. Starting from the simple idea of robustly detecting a reflected planar target, we demonstrate a complete system for robust and accurate reconstruction of scenes with mirrors and glass surfaces,” the report reads. “Given the ease of capture, our system could also be used to collect training data for learning-based approaches to detect reflective surfaces. Besides our core application of scanning indoor scenes, we envision multiple extensions and applications.”

The system finds mirrors by looking for a target that is on the camera rig, then refines the shape of the mirror by analyzing various features of the image.

“Our key idea is to add a tag to the capture rig that can only be observed when the camera faces a mirror or glass surface,” the report reads.
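
A toy version of that idea (hypothetical: it uses OpenCV's ArUco markers as the tag, while FRL's actual tag design and detector are not public) makes the mechanism clear. The tag faces the same direction as the camera lens, so it can only ever appear in an image via a reflection:

```python
# Hypothetical sketch of the rig-tag trick using OpenCV ArUco markers
# (OpenCV >= 4.7 ArucoDetector API); not FRL's implementation.
import cv2

RIG_TAG_ID = 7  # assumed ID of the marker fixed to the capture rig

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

def frame_contains_mirror(gray_image):
    """True if the rig's own tag is visible, i.e. reflected back at us."""
    corners, ids, _ = detector.detectMarkers(gray_image)
    return ids is not None and RIG_TAG_ID in ids.flatten()

# The reflected tag's corners would also give an initial estimate of
# the mirror plane, which the paper refines from other image features.
```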

The most obvious application is easier capture of environments that are more realistic to experience in VR. The research could enable representations of real-world locales to mix more realistically with digital elements, like your avatar, even if there are a number of mirrors and reflective surfaces.

“Mirrors are typically skirted around in 3D reconstruction, and most earlier work just ignores them by pretending they don’t exist,” Research Scientist Thomas Whelan said in an Oculus blog post. “But in the real world, they exist everywhere and ruin the majority of reconstruction approaches. So in a way, we broke the mold and tackled one of the oldest problems in 3D reconstruction head-on.”


The post Facebook Built A Camera System To Capture Mirrors appeared first on UploadVR.

New Mirror Reconstruction Technology Revealed by Facebook Reality Labs

In what looked like the start of things to come for the Oculus brand, back in May it was announced that Oculus Research, the company’s R&D division, was being rebranded as Facebook Reality Labs. Today, the new lab has made its first big announcement since the name change, introducing a new mirror reconstruction technology.


When creating a realistic scene, some of the hardest elements to get right are mirrors and glass panes: not only must the reflections look true, but so must the way they scatter light in a scene. So Facebook Reality Labs has created a new fully-automated pipeline to reconstruct mirrors and other reflective flat surfaces, which it’ll be discussing at SIGGRAPH 2018 this month.

Detailing the research on the Oculus blog, the team highlights the advances made over existing 3D scanning systems.

“If you look around the mirror section of a home decor shop, you’ll immediately see a wide range of shapes and sizes—almost no two mirrors are the same!” says Research Scientist Thomas Whelan. “It became obvious to us very early on that we needed a general solution that didn’t make too many assumptions about what was being reconstructed. We really wanted a solution that just worked in real-world environments because that’s where it’s most useful.”


“The sheer variety of mirror types and shapes was quite stunning in the beginning,” Research Scientist Julian Straub went on to note. “Designing a system that would be able to handle most or all mirror shapes and sizes was the main goal. Then we realized that the system would also work with glass surfaces with a minor additional classification step. That was pretty cool.”

They learnt that by identifying mirrored surfaces, they could then re-render a scene with correct geometry and reflections. This led to the creation of a capture rig which included an infrared depth capture camera, an RGB color camera, two SLAM cameras, an infrared projector and a special target the system could identify. This allowed a mirror’s boundary to be identified even without a frame.

Check out the video below for a more hands-on description, or head over to SIGGRAPH 2018 for the full in-depth talk. This is just one of the ways Facebook Reality Labs is helping developers deliver more immersive content; as further advances are made, VRFocus will keep you updated.