Meta Presents Retinal Resolution & Ultra Bright HDR Prototype Headsets

Meta presented two prototype headsets, each solving a different aspect of its goal to make VR “indistinguishable from reality”.

Neither is a practical device intended to become a product. Instead, each simply demonstrates the effect and feeling of maxing out one of these aspects of VR display systems.

Butterscotch: Retinal Resolution

‘Retinal’ or ‘retina’ is a term often used to describe angular resolution that at least matches that of the human eye. The generally accepted threshold is 60 pixels per degree. No consumer VR headset yet comes close to this – Quest 2 reaches around 20 pixels per degree, while the $1,990 Varjo Aero reaches 35 pixels per degree. Varjo’s $5,500 business-focused headsets surpass retinal resolution, but only in a tiny area in the center of your view.
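
As a rough sanity check, angular resolution can be estimated by dividing a headset’s horizontal pixel count per eye by its horizontal field of view. The figures below are approximations (Quest 2’s effective field of view varies with IPD and eye relief), but they land close to the 20 pixels per degree cited above:

```python
# Back-of-the-envelope angular resolution estimate (approximate figures).
def pixels_per_degree(horizontal_pixels_per_eye, horizontal_fov_degrees):
    return horizontal_pixels_per_eye / horizontal_fov_degrees

# Quest 2: 1832 horizontal pixels per eye, ~90-degree horizontal field of view.
print(round(pixels_per_degree(1832, 90)))  # ~20 pixels per degree
```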

Butterscotch is a research prototype achieving 55 pixels per degree. Getting there required more than just a higher resolution display – Meta says it developed “a new hybrid lens that would fully resolve higher resolution”. CEO Mark Zuckerberg first teased what appears to be Butterscotch back in October.

The downside of Butterscotch is its very narrow field of view – only half that of Quest 2. Meta says it’s also heavy and bulky. The purpose of Butterscotch is to demonstrate and research the feeling of retinal resolution, not to be a practical product.

Starburst: Ultra Bright HDR

Starburst is a prototype headset demonstrating extremely bright displays with high dynamic range (HDR). Zuckerberg described bright HDR as “arguably the most important dimension of all” of reaching VR indistinguishable from reality.

Luminance is measured in nits. A traditional 60-watt incandescent bulb reaches around 250 nits, and a high-end HDR TV reaches around 1,000 nits. But the brightness outdoors dwarfs these numbers – the ambient brightness on a clear sunny day is tens of thousands of nits, and direct sunlight is over 1 billion nits (which is why you shouldn’t look directly at it).

Today’s Quest 2 reaches just 100 nits. Meta says Starburst reaches 20,000 nits and describes it as “one of the brightest HDR displays yet built”.
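
For a sense of scale, the jump from Quest 2 to Starburst can be expressed in photographic “stops”, where each stop is a doubling of luminance:

```python
# How much brighter Starburst is than Quest 2, using the figures above.
import math

quest2_nits = 100
starburst_nits = 20_000
ratio = starburst_nits / quest2_nits
print(f"{ratio:.0f}x brighter, about {math.log2(ratio):.1f} stops")  # 200x, ~7.6 stops
```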

Starburst is bulky, heavy, and tethered. Zuckerberg admitted it would be “wildly impractical” to ship in a product – “we’re using it to test and for further studies so we can get a sense of what the experience feels like”.

The only known headset with HDR displays is PlayStation VR2, but Sony hasn’t yet revealed the headset’s brightness.

Meta’s Prototype Photoreal Avatars Can Now Be Generated With An iPhone

Meta’s prototype photoreal avatars can now be generated with an iPhone scan.

Facebook first showed off work on ‘Codec Avatars’ back in March 2019. Powered by multiple neural networks, they can be driven in real time by a prototype VR headset with five cameras: two internal cameras viewing the eyes and three external cameras viewing the lower face. Since then, the researchers have shown off several evolutions of the system, such as more realistic eyes, a version only requiring eye tracking and microphone input, and most recently a 2.0 version that approaches complete realism.

The capture rig used to generate Codec Avatars until now

Previously, generating an individual Codec Avatar required a specialized capture rig called MUGSY with 171 high resolution cameras. But Meta’s latest research gets rid of this requirement, generating an avatar from a scan made with a smartphone with a front-facing depth sensor, such as any iPhone with Face ID. You first pan the phone around your neutral face, then again while copying a series of 65 facial expressions.

This scanning process takes three and a half minutes on average, the researchers claim – though actually generating the avatar (in full detail) then takes six hours on a machine with four high-end GPUs. If deployed in a product, this step would likely happen on cloud GPUs, not the user’s device.

So how is it possible for what once required more than 100 cameras to now require only a phone? The trick is the use of a Universal Prior Model (UPM) “hypernetwork” – a neural network that generates the weights of another neural network, in this case the person-specific Codec Avatar. The researchers trained this UPM hypernetwork by scanning the faces of 255 diverse individuals using an advanced capture rig, similar to MUGSY but with 90 cameras.
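
To illustrate what a hypernetwork is, here is a deliberately tiny sketch – not Meta’s actual UPM, and every dimension is made up. One network takes an identity code (standing in for features derived from the phone scan) and emits the weights and bias of a small person-specific decoder, which is then run on a driving expression code:

```python
# Minimal hypernetwork sketch: one network generates the weights of another.
# All sizes are illustrative, not taken from Meta's research.
import torch
import torch.nn as nn

class HyperNetwork(nn.Module):
    def __init__(self, id_dim=256, hidden=512, target_in=64, target_out=3):
        super().__init__()
        self.target_in, self.target_out = target_in, target_out
        n_params = target_in * target_out + target_out  # weight + bias of the target layer
        self.net = nn.Sequential(
            nn.Linear(id_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),
        )

    def forward(self, identity_code):
        params = self.net(identity_code)
        weight, bias = params.split([self.target_in * self.target_out, self.target_out], dim=-1)
        return weight.view(self.target_out, self.target_in), bias

hyper = HyperNetwork()
identity_code = torch.randn(256)       # stands in for features from the phone scan
weight, bias = hyper(identity_code)    # person-specific decoder parameters
expression = torch.randn(64)           # stands in for a driving expression code
rgb = torch.relu(expression @ weight.T + bias)  # run the generated decoder
print(rgb.shape)  # torch.Size([3])
```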

While other researchers have already demonstrated avatar generation from a smartphone scan, Meta claims the quality of its result is state of the art. However, the current system cannot handle glasses or long hair, and is limited to the head, not the rest of the body.


Of course, Meta still has a long way to go to reach this kind of fidelity in shipping products. Meta Avatars today have a basic cartoony art style. Their realism has actually decreased over time, likely to better suit larger groups with complex environments in apps like Horizon Worlds on Quest 2’s mobile processor. Codec Avatars may, however, end up as a separate option, rather than a direct update to the cartoon avatars of today. In his interview with Lex Fridman, CEO Mark Zuckerberg described a future where you might use an “expressionist” avatar in casual games and a “realistic” avatar in work meetings.

In April Yaser Sheikh, who leads the Codec Avatars team, said it’s impossible to predict how far away it is from actually shipping. He did say that when the project started it was “ten miracles away” and he now believes it’s “five miracles away”.

Meta Research: Codec Avatars 2.0 Approach Complete Realism

Meta’s researchers are approaching complete realism with Codec Avatars 2.0, their prototype VR avatars using advanced machine learning techniques.

Facebook first showed off work on ‘Codec Avatars’ back in March 2019. Powered by multiple neural networks, the avatars are generated using a specialized capture rig with 171 cameras. Once generated, they can be driven in real time by a prototype VR headset with five cameras: two internal cameras viewing the eyes and three external cameras viewing the lower face. Since then, the researchers have shown off several evolutions of the system, such as more realistic eyes and a version only requiring eye tracking and microphone input.

At MIT’s Virtual Beings & Being Virtual workshop in April, Yaser Sheikh, who leads the Codec Avatars team, showed a video of the latest version of the project, described as “Codec Avatars 2.0”:

“I would say a grand challenge of the next decade is to see if we can enable remote interactions that are indistinguishable from in-person interactions,” Sheikh remarked.

In a paper published last year, Sheikh and his colleagues claim their newest models are smaller and more efficient than their past research, with the neural network now computing only the pixels visible to the headset. With this advancement, a Quest 2 headset is apparently able to render five avatars in real time in the same (likely empty) scene.

Still, the company seems to have a long way to go to reach this kind of fidelity in shipping products. Meta Avatars today have a basic cartoony art style. Their realism has actually decreased over time, likely to better suit larger groups with complex environments in apps like Horizon Worlds on Quest 2’s mobile processor.

Codec Avatars may however end up as a separate option, rather than a direct update to the cartoon avatars of today. In his interview with Lex Fridman, CEO Mark Zuckerberg described a future where you might use an “expressionist” avatar in casual games and a “realistic” avatar in work meetings.

During the workshop Sheikh noted that it’s impossible to predict how far away Codec Avatars is from actually shipping. He did however say that when the project started it was “ten miracles away”, and he now believes it’s “five miracles away”.

Facebook Researchers Help Typists Match Speed Without Physical Keyboard

Researchers at Facebook developed a predictive motion model for marker-based hand tracking that could enable some typists to match their speed and accuracy with a physical keyboard while only tapping their fingers on a flat surface.

I’ve emphasized some of the most notable bits announced by Facebook in a blog post today:

To support touch typing without a physical keyboard — and without the benefit of haptic feedback from individual keys — the team had to make sense of erratic typing patterns. They adopted statistical decoding techniques from automatic speech recognition: where speech recognition uses an acoustic model to predict phonemes from audio frames, they instead use a motion model to predict keystrokes from hand motion. This, along with a language model, predicts what people intended to type despite ambiguous hand motion. Using this new method, typists averaged 73 words per minute with a 2.4% uncorrected error rate using their hands, a flat surface, and nothing else, achieving similar speed and accuracy to the same typist on a physical keyboard.
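
To make the decoding idea concrete, here is a toy sketch – not Facebook’s system. A hypothetical motion model assigns a probability to each key for every tap, a word-level language model supplies a prior, and the decoder picks the word that best explains both. All probabilities and the vocabulary below are invented for illustration:

```python
# Toy keystroke decoder: motion-model likelihoods combined with a language-model prior.
import math

# Per tap: P(key | observed finger motion), as a hypothetical motion model might output.
tap_likelihoods = [
    {"t": 0.6, "r": 0.3, "y": 0.1},
    {"h": 0.5, "j": 0.4, "n": 0.1},
    {"e": 0.7, "w": 0.2, "r": 0.1},
]

# Hypothetical unigram language model over candidate words.
word_prior = {"the": 0.05, "tje": 1e-7, "rhe": 1e-7, "yhw": 1e-9}

def score(word):
    """Log-probability of a candidate word given the taps and the prior."""
    if len(word) != len(tap_likelihoods):
        return float("-inf")
    logp = math.log(word_prior[word])
    for char, likelihood in zip(word, tap_likelihoods):
        logp += math.log(likelihood.get(char, 1e-9))
    return logp

print(max(word_prior, key=score))  # "the" – the language model resolves the ambiguous taps
```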

This surprising result led researchers to investigate why hand tracking was more effective than other physical methods, like tapping on a tablet. The team discovered hand tracking is uniquely good at isolating individual fingers and their trajectories as they reach for keys — information that is missing from capacitive sensing on tablets and smartphones today.

In the video below, you can see a person with markers on their hands being tracked by external cameras:

I asked a Facebook representative if Quest 2’s XR2 processor might enable any specific features for hand tracking, and if the marker-based research here might transfer to a markerless system as seen with Quest’s hand tracking.

“This marker-based hand tracking system is still purely in the research phase and not currently on our product roadmap — it’s one of several text input research methods FRL Research is exploring, including the EMG-based approach Michael Abrash showed last month at Facebook Connect,” a spokesperson wrote in an email. “For Quest 2, our current focus for text input is tracked keyboard support we introduced at Connect as part of Infinite Office.”

We found multiple text input methods on Oculus Quest 2 in the Oculus Browser prior to a launch software update. There was a mode that would let you tap out letters on your phone and send them to your Quest, as well as voice dictation. Facebook frequently rolls out experimental features for its standalone VR headsets as opt-in testing betas for a period before improving performance and finalizing the feature more broadly, with Oculus Link and Hand Tracking itself being the most significant examples. Right now, if you connect a Bluetooth keyboard and don’t use the tracked controllers, Quest gets confused about whether you are typing input on a physical keyboard or pinching to select a letter on a floating virtual keyboard with hand tracking. That’s likely to change with the forthcoming Infinite Office update and a tracked keyboard.

So while typing on any flat surface is just one of several methods being explored for text entry in AR and VR headsets by Facebook, the company did note the research was “a major step towards FRL Research’s goal of using hand-tracking to let you type 70 WPM on any surface.”

Microsoft’s ‘PIVOT’ Haptics Research Could Make Throwing A Ball In VR More Believable

Even if you were separated from family and friends by time or distance, what would it mean if virtual reality could believably offer you a game of catch with them anyway?

The latest VR haptics research from Microsoft — called “PIVOT” — might see this dream realized more believably.

The wrist-strapped accessory put together by Microsoft researchers features a piece that swings into the wearer’s palm for a believable catching and throwing experience in VR.


Today, most VR developers create their own throwing and grasping mechanics, and these can feel different from world to world or controller to controller. The haptic feedback provided by consumer VR hardware today, meanwhile, might be described as little more than buzzing. Still, the effect can be pretty satisfying for certain applications, like pulling back a bowstring or the slight tap of a ball against a table tennis paddle. Microsoft’s researchers seem to be proposing a completely different level of believability with PIVOT.

The new research presented as part of the 2020 ACM Symposium on User Interface Software and Technology (UIST) is detailed in a paper called “Haptic PIVOT: On-Demand Handhelds in VR” co-authored by Robert Kovacs, Eyal Ofek, Mar Gonzalez Franco, Alexa Fay Siu, Sebastian Marwecki, Christian Holz, and Mike Sinclair. According to a blog post about the work, PIVOT is attached near the wrist and “we’re able to render the momentum and drag of thrown and caught objects, which are governed by Newton’s laws, including simulating speeds of objects upon reaching the hand: The robotized haptic handle deploys when needed, approaching and finally reaching the hand, creating the feeling of first contact—going from a bare hand to one holding an object—thus mimicking our natural interaction with physical objects in a way that traditional handheld controllers can’t.”
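
To give a flavor of the physics involved, here is a minimal sketch – not taken from the paper – of one quantity a device like PIVOT would need: the speed of a virtual ball at the moment it reaches the hand, so the motorized handle can approach at a matching velocity. The throw speed and drop height are assumptions:

```python
# Speed of a virtual ball when it reaches the hand (assumed throw parameters).
import math

GRAVITY = 9.81          # m/s^2
release_speed = 5.0     # m/s, assumed speed of the throw toward the user
drop_height = 0.3       # m, assumed height the ball falls before reaching the hand

# Energy conservation: v^2 = v0^2 + 2*g*h
speed_at_hand = math.sqrt(release_speed**2 + 2 * GRAVITY * drop_height)
print(f"{speed_at_hand:.2f} m/s")  # target approach speed for the haptic handle
```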

Check out the apple-picking demonstration in the video:

The paper linked above closes with the suggestion that these “results support PIVOT’s potential for future use.” Future work might look at reducing the weight of the wearable or adding more motorized pieces to better line up the ball mechanism with the hand. Other sensors, like cameras for finger tracking, could also be added to enable more precise interactions.


We’ve seen other research with objects that stretch or transform in shape — could that be applied here to offer different types of in-hand objects which swing into position at just the right time? The paper details other “exploratory” prototypes they’d considered, including a design that featured a retrofitted Windows Mixed Reality controller’s handle that could swing into your grip, and another design with a 3DoF joystick.


If you wore a future PIVOT-based haptic device on your arm, then, you might be able to put on your baseball glove and catch a ball, release it from your glove and then catch it with your other hand to throw it back to someone who isn’t actually in the same place as you.

Google Sheets And Excel VR Spreadsheets Are A Thing Now

If your dreams occasionally involve spreadsheets that extend endlessly in all directions, good news: Researchers have developed a virtual reality spreadsheet interface that could expand Google Sheets and Microsoft Excel files from flat screens into 3D spaces. Rather than being nightmare-inducing, it could actually make spreadsheet apps more usable than before.

While traditional spreadsheets have been limited by the boundaries of 2D windows and displays, the research team envisions VR opening up adjacent 2D workspaces for related content, then using 3D for everything from floating menus to cell selection and repositioning. In one example, a VR headset mirrors the view of one spreadsheet page displayed on a physical tablet, while two virtual sheets sit to the left and right, permitting drag-and-drop access to their cells, as an overview hovers above.

Alternatively, tablet-surrounding areas could display useful reference materials, expanded views of formulas, or the full collection of a spreadsheet’s pages displayed as floating previews. Another possibility is a single spreadsheet page that stretches far further than the 30-degree diagonal field of view occupied by a typical tablet on a desk, utilizing more of the ~110-degree fields of view supported by VR headsets.
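
That field-of-view comparison is easy to sanity-check with a little trigonometry. Assuming a tablet with a roughly 12-inch diagonal viewed from a typical desk distance (both figures are assumptions), its angular size works out to around 30 degrees:

```python
# Angular size of a tablet on a desk, for comparison with a VR headset's field of view.
import math

def visual_angle_degrees(size_m, distance_m):
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

tablet_diagonal_m = 0.31   # ~12.3-inch diagonal (assumption)
viewing_distance_m = 0.55  # assumed comfortable desk viewing distance
print(round(visual_angle_degrees(tablet_diagonal_m, viewing_distance_m)))  # ~31 degrees
# Compare with the ~110-degree horizontal field of view of current VR headsets.
```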

Fans of the film Minority Report and portrayals of similarly holographic future 3D interfaces will appreciate the team’s use of a floating pie menu — complete with a drop shadow on the spreadsheet — for selecting features and functions, as well as spherical rather than flat buttons and other visual elements that appear to leap off the flat pages. The 3D spreadsheet workspace could also be extended with floating desktop objects, such as a virtual trash can, to make disposing of unwanted content more intuitive.

Interestingly, the project’s research team includes members of the Mixed Reality Lab at Germany’s Coburg University, as well as a professor from the University of Cambridge and two principal researchers from Microsoft — mixed reality expert Eyal Ofek and UX engineer Michel Pahud. But their work isn’t limited to potential Microsoft applications: Dr. Jens Grubert, one of the paper’s authors, tells VentureBeat that the cross-organizational team includes “long time collaborators” and actually used Google Sheets rather than Excel for the backend.

In addition to an HTC Vive Pro VR headset and Microsoft Surface Pro 4 tablet, the researchers employed a spatially tracked stylus, enabling precision direct input for spreadsheet interactions while adding the freedom of in-air movement within a 3D space. Unsurprisingly, the VR spreadsheets can be used on existing commodity PC hardware, and the virtual UI was created with the Unity engine. Sheets pages are rendered within Chromium browser windows to match the resolution and size of the Surface Pro 4’s screen.

Full project details are available in the “Pen-based Interaction with Spreadsheets in Mobile Virtual Reality” research paper. If you’re interested in deeper dives, you can see a video of the project here, as well as a broader exploration of the team’s VR-tablet research here, ahead of their presentation at the IEEE’s International Symposium on Mixed and Augmented Reality, being held online from November 9 to 13.

This post by Jeremy Horwitz originally appeared on VentureBeat.


Google Figured Out How To Stream 6DoF Video Over The Internet

Researchers from Google developed the first end-to-end 6DoF video system, which can even stream over (high-bandwidth) internet connections.

Current 360 videos can take you to exotic places and events, and you can look around, but you can’t actually move your head forward or backward positionally. This makes the entire world feel locked to your head, which really isn’t the same as being somewhere at all.

Google’s new system encapsulates the entire video stack – capture, reconstruction, compression, and rendering – delivering a milestone result.

The camera rig features 46 synchronized 4K cameras running at 30 frames per second. Each camera is attached to a “low cost” acrylic dome. Since the acrylic is semi-transparent, it can even be used as a viewfinder.

Each camera used has a retail price of $160, which totals just north of $7,000 for the rig. That may sound high, but it’s actually considerably lower cost than bespoke alternatives. 6DoF video is a new technology just starting to become viable.

The result is a 220-degree light field with a width of 70 cm – that’s how far you can move your head. The resulting resolution is 10 pixels per degree, meaning it will probably look somewhat blurry on any modern headset except the original HTC Vive. As with all technology, that will improve over time.

But what’s really impressive is the compression and rendering. A light field video can be streamed over a reliable 300 Mbit/sec internet connection. That’s still well beyond average internet speeds, but most major cities now offer this kind of bandwidth.
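
For context, here is what a 300 Mbit/s stream works out to in more familiar units:

```python
# Converting the 300 Mbit/s light field stream into bytes.
mbit_per_second = 300
mbyte_per_second = mbit_per_second / 8            # 37.5 MB/s
gbyte_per_minute = mbyte_per_second * 60 / 1000   # ~2.25 GB per minute of footage
print(mbyte_per_second, round(gbyte_per_minute, 2))
```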

How Does It Work?

In 2019, Google’s AI researchers developed a machine learning algorithm called DeepView. Given four images of the same scene from slightly different perspectives, DeepView can generate a depth map and even synthesize new images from arbitrary perspectives.

This new 6DoF video system uses a modified version of DeepView. Instead of representing the scene with 2D planes, the algorithm uses a collection of spherical shells. A second algorithm then reprocesses this output down to a much smaller number of shells.

Finally, these spherical layers are transformed into a much lighter “layered mesh”, which samples from a texture atlas to further save on resources (this is a technique used in game engines, where textures for different models are stored in the same file, tightly packed together).
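
As a generic illustration of rendering a stack of semi-transparent spherical layers (this is not Google’s implementation), each shell contributes color weighted by its opacity, composited from the farthest shell to the nearest:

```python
# "Over" compositing of a small stack of semi-transparent shells, back to front.
import numpy as np

num_shells, height, width = 4, 8, 8
rgb = np.random.rand(num_shells, height, width, 3)     # per-shell color
alpha = np.random.rand(num_shells, height, width, 1)   # per-shell opacity

out = np.zeros((height, width, 3))
for i in range(num_shells):            # index 0 = farthest shell
    out = rgb[i] * alpha[i] + out * (1.0 - alpha[i])

print(out.shape)  # (8, 8, 3) – final image for this view
```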

You can read the research paper and try out some samples in your browser on Google’s public page for the project.

Light field video is still an emerging technology, so don’t expect YouTube to start supporting light field videos in the near future. But it does look clear that one of the holy grails of VR content, streamable 6DoF video, is now a solvable problem.

We’ll be keeping a close eye on this technology as it starts to transition from research to real world products.


Community Download: What’s The Most Exciting Future Use For VR To You?

Community Download is a weekly discussion-focused article series published (usually) every Monday in which we pose a single core question to you all, our readers, in the spirit of fostering discussion and debate. For today’s Community Download, we want to know: what are you most excited about for the future of VR?


Here at UploadVR we do our best to not only report on the current state of the VR industry as it relates to consumers, but also to report on what’s coming next. We need to keep an eye forward on the future, especially with a technology that’s still so early in its life with so many changes happening rapidly all the time.

One example is the rise of hand tracking and finger tracking thanks to the Valve Index controllers and the Oculus Quest’s embedded tracking cameras. People are using it to connect online and communicate with sign language and to control their desktops without any controllers, mice, or keyboards; surgeons are using VR for training; and soon we’ll have realistic real-life avatars too. There’s a lot to look forward to, especially if you consider foveated rendering, standalone headsets becoming more of a thing, and so much more.

This leads us to the key discussion question of the week: What are you most excited about for the future of VR? Do you want a big generational leap forward in visual fidelity, are you hoping for a groundbreaking social VR app, or do you look forward to creative use cases more than anything?

Let us know down in the comments below!
