A report from the Financial Times maintains Meta is currently in talks with AR headset creator Magic Leap to strike a multiyear deal, which could include intellectual property licensing and contract manufacturing of AR headsets in North America.
The AR unicorn is said to possess valuable IP regarding custom components, including its optics, waveguides, and software.
It’s said a potential deal may also allow Meta to lessen its reliance on China for component manufacturing. In 2019, Magic Leap partnered with manufacturing solutions company Jabil to build a plant in Guadalajara, Mexico, which the report says can assemble headsets in “the tens of thousands a year.”
Citing people with knowledge of the talks, the report notes, however, that a specific joint Meta-Magic Leap headset isn’t expected.
While neither company commented on a potential partnership, Magic Leap said this to the Financial Times:
“Given the complexities of developing true augmented reality technologies and the intricacies involved with manufacturing these optics, as well as the issues many companies experience with overseas supply chain dependencies, we have entered into several non-exclusive IP licensing and manufacturing partnerships with companies looking to enter the AR market or expand their current position.”
Since it exited stealth in 2014, Magic Leap has released two AR headsets, Magic Leap 1 and Magic Leap 2, which have been compared in functionality to Microsoft’s HoloLens AR headsets.
The company has raised over $4 billion, with minority investors including Google, Alibaba, Qualcomm, AT&T, and Axel Springer. Its majority stakeholder is Saudi Arabia’s state-owned sovereign wealth fund.
In addition to its Quest Pro mixed reality headset, Meta has confirmed it’s currently working on the next iteration of Quest, likely Quest 3, as well as its own AR glasses. Meta started real-world testing of Project Aria in 2020, a platform for training its AR perception systems and assessing public perception of the technology.
Eye-tracking—the ability to quickly and precisely measure the direction a user is looking while inside of a VR headset—is often talked about within the context of foveated rendering, and how it could reduce the performance requirements of XR headsets. And while foveated rendering is an exciting use-case for eye-tracking in AR and VR headsets, eye-tracking stands to bring much more to the table.
Updated – May 2nd, 2023
Eye-tracking has been talked about with regard to XR as a distant technology for many years, but the hardware is finally becoming increasingly available to developers and customers. PSVR 2 and Quest Pro are the most visible examples of headsets with built-in eye-tracking, along with the likes of Varjo Aero, Vive Pro Eye and more.
With this momentum, in just a few years we could see eye-tracking become a standard part of consumer XR headsets. When that happens, there’s a wide range of features the tech can enable to drastically improve the experience.
Let’s first start with the one that many people are already familiar with. Foveated rendering aims to reduce the computational power required for displaying demanding AR and VR scenes. The name comes from the ‘fovea’—a small pit at the center of the human retina which is densely packed with photoreceptors. It’s the fovea which gives us high resolution vision at the center of our field of view; meanwhile our peripheral vision is actually very poor at picking up detail and color, and is better tuned for spotting motion and contrast than seeing detail. You can think of it like a camera which has a large sensor with just a few megapixels, and another smaller sensor in the middle with lots of megapixels.
The region of your vision in which you can see in high detail is actually much smaller than most think—just a few degrees across the center of your view. The difference in resolving power between the fovea and the rest of the retina is so drastic, that without your fovea, you couldn’t make out the text on this page. You can see this easily for yourself: if you keep your eyes focused on this word and try to read just two sentences below, you’ll find it’s almost impossible to make out what the words say, even though you can see something resembling words. The reason that people overestimate the foveal region of their vision seems to be because the brain does a lot of unconscious interpretation and prediction to build a model of how we believe the world to be.
Foveated rendering aims to exploit this quirk of our vision by rendering the virtual scene in high resolution only in the region that the fovea sees, and then drastically cut down the complexity of the scene in our peripheral vision where the detail can’t be resolved anyway. Doing so allows us to focus most of the processing power where it contributes most to detail, while saving processing resources elsewhere. That may not sound like a huge deal, but as the display resolution of XR headsets and field-of-view increases, the power needed to render complex scenes grows quickly.
Eye-tracking of course comes into play because we need to know where the center of the user’s gaze is at all times quickly and with high precision in order to pull off foveated rendering. While it’s difficult to pull this off without the user noticing, it’s possible and has been demonstrated quite effectively on recent headsets like Quest Pro and PSVR 2.
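To make the idea concrete, here’s a toy sketch of how a renderer might pick a per-pixel shading rate from eye-tracking data. This isn’t any headset’s actual pipeline, and the angle thresholds are illustrative assumptions; real implementations typically shade in tiles rather than per pixel.

```python
import math

def shading_rate(pixel_dir, gaze_dir, inner_deg=5.0, outer_deg=20.0):
    """Pick a shading rate for a pixel based on its angular distance
    from the gaze direction (both given as unit vectors).
    Returns 1 (full rate), 2 (half rate), or 4 (quarter rate)."""
    dot = max(-1.0, min(1.0, sum(p * g for p, g in zip(pixel_dir, gaze_dir))))
    angle = math.degrees(math.acos(dot))
    if angle <= inner_deg:      # foveal region: full resolution
        return 1
    elif angle <= outer_deg:    # transition band: half rate
        return 2
    return 4                    # periphery: quarter rate

# Looking straight ahead, a pixel ~30 degrees off-axis gets the coarse rate.
print(shading_rate((0.0, 0.5, 0.866), (0.0, 0.0, 1.0)))  # → 4
```

The key point is that the decision depends only on angular distance from the gaze point, which is why the eye tracker’s latency and precision matter so much.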
Automatic User Detection & Adjustment
In addition to detecting movement, eye-tracking can also be used as a biometric identifier. That makes eye-tracking a great candidate for multiple user profiles across a single headset—when I put on the headset, the system can instantly identify me as a unique user and call up my customized environment, content library, game progress, and settings. When a friend puts on the headset, the system can load their preferences and saved data.
Eye-tracking can also be used to precisely measure IPD (the distance between one’s eyes). Knowing your IPD is important in XR because it’s required to move the lenses and displays into the optimal position for both comfort and visual quality. Unfortunately, many people understandably don’t know their IPD off the top of their head.
With eye-tracking, it would be easy to instantly measure each user’s IPD and then have the headset’s software assist the user in adjusting the headset’s IPD to match, or warn users that their IPD is outside the range supported by the headset.
In more advanced headsets, this process can be invisible and automatic—IPD can be measured invisibly, and the headset can have a motorized IPD adjustment that automatically moves the lenses into the correct position without the user needing to be aware of any of it, like on the Varjo Aero, for example.
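The measurement itself is simple once the eye tracker reports pupil-center positions. Below is a minimal sketch; the function names and the supported IPD range are hypothetical, and real trackers report positions in their own calibrated coordinate frame.

```python
def measure_ipd_mm(left_pupil, right_pupil):
    """Estimate IPD as the distance between pupil-center positions
    (in millimeters, in the headset's coordinate frame)."""
    return sum((l - r) ** 2 for l, r in zip(left_pupil, right_pupil)) ** 0.5

def check_ipd(ipd_mm, min_supported=58.0, max_supported=72.0):
    """Compare against an assumed supported range and suggest an action."""
    if ipd_mm < min_supported or ipd_mm > max_supported:
        return f"IPD {ipd_mm:.1f} mm is outside the supported range"
    return f"Adjust lenses to {ipd_mm:.1f} mm"

print(check_ipd(measure_ipd_mm((-31.7, 0.0, 0.0), (31.7, 0.0, 0.0))))
```

On a headset with motorized lenses, the second function’s output would drive the adjustment motor instead of a message to the user.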
The optical systems used in today’s VR headsets work pretty well but they’re actually rather simple and don’t support an important function of human vision: dynamic focus. This is because the display in XR headsets is always the same distance from our eyes, even when the stereoscopic depth suggests otherwise. This leads to an issue called vergence-accommodation conflict. If you want to learn a bit more in depth, check out our primer below:
In the real world, to focus on a near object the lens of your eye bends to make the light from the object hit the right spot on your retina, giving you a sharp view of the object. For an object that’s further away, the light is traveling at different angles into your eye and the lens again must bend to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.
Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate inward to bring each eye’s perspective into alignment. You can see this too with our little finger trick as above: this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double-images of objects far behind your finger. When you then focus on those objects behind your finger, now you see a double finger image.
With precise enough instruments, you could use either vergence or accommodation to know how far away an object is that a person is looking at. But the thing is, both accommodation and vergence happen in your eye together, automatically. And they don’t just happen at the same time—there’s a direct correlation between vergence and accommodation, such that for any given measurement of vergence, there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, anytime you look at anything.
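That one-to-one correspondence is just geometry: for a fixation point straight ahead, the vergence angle is fixed by the viewing distance and the IPD. A small sketch of the relationship (the 63 mm IPD is an assumed typical value):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Total vergence angle for a fixation point straight ahead:
    each eye rotates inward by atan((ipd/2) / distance)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

def fixation_distance_m(vergence_deg, ipd_m=0.063):
    """Invert the relationship: recover the fixation distance
    implied by a measured vergence angle."""
    return (ipd_m / 2) / math.tan(math.radians(vergence_deg) / 2)

# A cup 30 cm away demands ~12 degrees of convergence; a distant
# mountain demands almost none.
print(round(vergence_angle_deg(0.3), 1))    # → 12.0
print(round(vergence_angle_deg(100.0), 2))  # → 0.04
```

Because the mapping is invertible, a system that measures vergence can infer the depth the eyes are trying to accommodate to, which is exactly what varifocal displays need.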
But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.
In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which shows the virtual scene, and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, and the lens’ shape is static, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).
That comes in conflict with vergence in such headsets which—because we can show a different image to each eye—is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy.
But the most realistic (and arguably, most comfortable) display we could create would eliminate the vergence-accommodation issue and let the two work in sync, just like we’re used to in the real world.
Varifocal displays—those which can dynamically alter their focal depth—are proposed as a solution to this problem. There’s a number of approaches to varifocal displays, perhaps the most simple of which is an optical system where the display is physically moved back and forth from the lens in order to change focal depth on the fly.
Achieving such an actuated varifocal display requires eye-tracking because the system needs to know precisely where in the scene the user is looking. By tracing a path into the virtual scene from each of the user’s eyes, the system can find the point that those paths intersect, establishing the proper focal plane that the user is looking at. This information is then sent to the display to adjust accordingly, setting the focal depth to match the virtual distance from the user’s eye to the object.
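In practice the two gaze rays rarely intersect exactly, so a common trick is to take the midpoint of the shortest segment between them. Here’s a hedged sketch of that computation (the function name and coordinate conventions are assumptions, not any vendor’s API):

```python
import numpy as np

def focal_depth(origin_l, dir_l, origin_r, dir_r):
    """Estimate the distance to the gazed-at point by finding the
    midpoint of the shortest segment between the two gaze rays."""
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if denom < 1e-12:          # rays nearly parallel: gaze at infinity
        return float("inf")
    t_l = (b * e - c * d) / denom  # parameter along the left ray
    t_r = (a * e - b * d) / denom  # parameter along the right ray
    p = (origin_l + t_l * d_l + origin_r + t_r * d_r) / 2
    eyes_mid = (origin_l + origin_r) / 2
    return np.linalg.norm(p - eyes_mid)

# Eyes 63 mm apart, both looking at a point 30 cm straight ahead.
print(round(focal_depth(np.array([-0.0315, 0, 0]), np.array([0.0315, 0, 0.3]),
                        np.array([0.0315, 0, 0]), np.array([-0.0315, 0, 0.3])), 3))
```

The returned depth is what would then drive the display’s focal-plane adjustment.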
A well implemented varifocal display could not only eliminate the vergence-accommodation conflict, but also allow users to focus on virtual objects much nearer to them than in existing headsets.
And well before we’re putting varifocal displays into XR headsets, eye-tracking could be used for simulated depth-of-field, which could approximate the blurring of objects outside of the focal plane of the user’s eyes.
As of now, there’s no major headset on the market with varifocal capabilities, but there’s a growing body of research and development trying to figure out how to make the capability compact, reliable, and affordable.
While foveated rendering aims to better distribute rendering power between the part of our vision where we can see sharply and our low-detail peripheral vision, something similar can be achieved for the actual pixel count.
Rather than just changing the detail of the rendering on certain parts of the display vs. others, foveated displays are those which are physically moved (or in some cases “steered”) to stay in front of the user’s gaze no matter where they look.
Foveated displays open the door to achieving much higher resolution in AR and VR headsets without brute-forcing the problem by trying to cram pixels at higher resolution across our entire field-of-view. Doing so would not only be costly, but also runs into challenging power and size constraints as the number of pixels approaches retinal resolution. Instead, foveated displays would move a smaller, pixel-dense display to wherever the user is looking based on eye-tracking data. This approach could even lead to higher fields-of-view than could otherwise be achieved with a single flat display.
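Some rough arithmetic shows why the brute-force route is so punishing. All the numbers below are illustrative assumptions (roughly 60 pixels per degree is often cited as near-retinal density), not specs of any headset:

```python
def pixels_needed(fov_h_deg, fov_v_deg, ppd):
    """Pixels per eye for a uniform display at a given angular density
    (small-angle approximation: pixels ≈ degrees × pixels-per-degree)."""
    return fov_h_deg * ppd * fov_v_deg * ppd

# Brute force: near-retinal density (~60 ppd) across a 100° × 100° field.
uniform = pixels_needed(100, 100, 60)    # 36,000,000 px per eye

# Foveated: 60 ppd on a steered 20° × 20° patch, ~15 ppd everywhere else.
foveated = pixels_needed(20, 20, 60) + pixels_needed(100, 100, 15)

print(f"{uniform / foveated:.1f}x fewer pixels")  # → 9.8x fewer pixels
```

Even with generous assumptions for the peripheral display, the steered-insert approach needs an order of magnitude fewer pixels to render and illuminate each frame.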
Varjo is one company working on a foveated display system. They use a typical display that covers a wide field of view (but isn’t very pixel dense), and then superimpose a microdisplay that’s much more pixel dense on top of it. The combination of the two means the user gets both a wide field of view for their peripheral vision, and a region of very high resolution for their foveal vision.
Granted, this foveated display is still static (the high resolution area stays in the middle of the display) rather than dynamic, but the company has considered a number of methods for moving the display to ensure the high resolution area is always at the center of your gaze.
OpenXR is an open standard that aims to standardize the development of VR and AR applications, making hardware and software more interoperable. The standard has been in development since 2017 and is backed by virtually every major hardware, platform, and engine company in the XR industry.
“The adoption of OpenXR as a common AR ecosystem standard ensures the continual growth and maturation of AR,” Magic Leap said in its announcement. “Magic Leap will continue to advance this vision as Vice Chair of the OpenXR Working Group. In this role, Magic Leap provides technical expertise and collaborates with other members to address the needs of developers and end-users, the scope of the standard, and best practices for implementation.”
It’s true that Magic Leap has been part of the OpenXR Working Group—a consortium responsible for developing the standard—for a long time, but we can’t help but feel like Apple’s heavily rumored entrance into the XR space lit a bit of a fire under the feet of the company to get the work across the finish line.
In doing so, Magic Leap has strengthened itself—and the existing XR industry—against what could be a standards upheaval by Apple.
Apple is well known for ignoring certain widely adopted computing standards and choosing to use their own proprietary technologies, in some cases causing a technical divide between platforms. You may well have experienced this yourself: have you ever found yourself in a conversation about ‘blue bubbles and green bubbles’ when it comes to texting?
With an industry as young as XR—and with Apple being so secretive about its R&D in the space—there’s a good chance the company will have its own way of doing things, especially when it comes to how developers and their applications are allowed to interact with the headset.
If Apple doesn’t want to support OpenXR, this is likely the biggest risk for the industry; if developers have to change their development processes for Apple’s headset, that would create a divide between Apple and the rest of the industry, making applications less portable between platforms.
And while OpenXR-supporting incumbents have the upper hand for the time being (because they have all the existing XR developers and content on their side), one would be foolish to forget the army of experienced iOS developers that are used to doing things the ‘Apple way’. If those developers start their XR journey with Apple’s tools, it will be less likely that their applications will come to OpenXR headsets.
On the other hand, it’s possible that Apple will embrace OpenXR because it sees the value that has already come from years of ironing out the standard—and the content that already supports it. Apple could even be secretly part of the OpenXR Working Group, as companies aren’t forced to make their involvement known.
In the end it’s very likely that Apple will have its own way of doing things in XR, but whether that manifests more in the content running on the headset or down at the technical level, remains to be seen.
Magic Leap 2 isn’t available just yet, but when it hits the market later this year it will be directly competing with Microsoft’s HoloLens 2. Though Magic Leap 2 beats out its rival in several meaningful ways, its underlying design still leaves HoloLens 2 with some advantages.
Magic Leap as a company has had a wild ride since its founding way back in 2010, with billions of dollars raised, an ambitious initial product that fell short of the hype, and a near-death and rebirth with a new CEO.
The company’s latest product, Magic Leap 2, in many ways reflects the ‘new’ Magic Leap. It’s positioned clearly as an enterprise product, aims to support more open development, and it isn’t trying to hype itself as a revolution. Hell—Magic Leap is even (sensibly) calling it an “AR headset” this time around instead of trying to invent its own vocabulary for the sake of differentiation.
After trying the headset at AWE 2022 last week, I got the sense that, like the company itself, Magic Leap 2 feels like a more mature version of what came before—and it’s not just the sleeker look.
Magic Leap 2 Hands-on
The most immediately obvious improvement to Magic Leap 2 is in the field-of-view, which is increased from 50° to 70° diagonally. At 70°, Magic Leap 2 feels like it’s just starting to scratch that ‘immersive’ itch, as you have more room to see the augmented content around you which means less time spent ‘searching’ for it when it’s out of your field-of-view.
While I suspect many first-time Magic Leap 2 users will come away with a ‘wow the field-of-view is so good!’ reaction… it’s important to remember that the design of ML2 (like its predecessor), ‘cheats’ a bit when it comes to field-of-view. Like the original, the design blocks a significant amount of your real-world peripheral vision (intentionally, as far as I can tell), which makes the field-of-view appear larger than it actually is by comparison.
This isn’t necessarily a bad thing if only the augmented content is your main focus (I mean, VR headsets have done this pretty much since day one), but it’s a questionable design choice for a headset that’s designed to integrate your real-world and the augmented world. Thus real-world peripheral vision remains a unique advantage that HoloLens 2 holds over both ML1 and ML2… but more on that later.
Unlike some other AR headsets, Magic Leap 2 (like its predecessor) has a fairly soft edge around the field-of-view. Instead of a hard line separating the augmented world from the real-world, it seems to gently fade away, which makes it less jarring when things go off-screen.
Another bonus to immersion compared to other devices is the headset’s new dimming capability which can dynamically dim the lenses to reduce incoming ambient light in order to make the augmented content appear more solid. Unfortunately this was part of the headset that I didn’t have time to really put through its paces in my demo as the company was more focused on showing me specific content. Another thing I didn’t get to properly compare is resolution. Both are my top priority for next time.
Tracking remains as good as ever with ML2, and on par with HoloLens 2. Content feels perfectly locked to the environment as you move your head around. I did see some notable blurring, mostly during positional head movement. ML1 had a similar issue and it has likely carried over as part of the headset’s underlying display technology. In any case it seems mostly hidden during ‘standing in one spot’ use-cases, and impacts text legibility more than anything else.
And while the color-consistency issue across the image is more subtle (the ‘rainbow’ look), it’s still fairly obvious. It didn’t appear to be as bad as ML1 or HoloLens 2, but it’s still there which is unfortunate. It doesn’t really impact the potential use-cases of the headset, but it does bring a slight reduction to the immersiveness of the image.
While ML2 has been improved almost across the board, there’s one place where it actually takes a step back… and it was one of ML1’s most hyped features: the mystical “photonic lightfield chip” (AKA a display with two focal planes) is no more. Though ML2 does have eye-tracking (likely improved thanks to doubling the number of cameras), it only supports a single focal plane (as is the case for pretty much all AR headsets available today).
It looks like Magic Leap is holding a barn burner of a sale on its first AR headset, Magic Leap 1, as the one-time $2,300 device can now be had for $550.
As first reported by GMW3, Magic Leap appears to be flushing excess stock of the 2018-era AR headset via the Amazon-owned online retailer Woot.
The listing (find it here) is for a brand new Magic Leap 1, including the headset’s hip-worn compute unit and single controller. The sale is happening from now until June 1st, and features a three-unit limit per customer. Amazon US Prime members qualify for free shipping, which ought to arrive for those of you in the lower 48 in early June.
If you’re tempted, there’s a few things you should know before hitting the ‘buy now’ button. Be warned that since Magic Leap pivoted to serve only enterprise users, its Magic Leap World online app store isn’t likely to see any new apps outside of the handful released between 2018 and 2020.
Still, there’s a mix of apps, such as Spotify and the room-scale shooter Dr. Grordbort’s Invaders, which might serve best as tech demos, giving prospective augmented reality devs a sense of what they might create for a bona fide AR headset, ostensibly in preparation for what devices may come—we’re looking at Apple, Google, and Meta in the near future for mixed reality headsets capable of both VR and passthrough AR.
Launched in 2018, Magic Leap straddled an uneasy rift between enterprise and prosumers with ML 1 (known then as ‘ML One’). Reception by consumers for its $2,300 AR headset was lukewarm, and messaging didn’t seem focused enough to give either developers or consumers hope that a more accessible bit of ML hardware was yet to come. Then in mid-2020, company founder and CEO Rony Abovitz stepped down, giving way to former Microsoft exec Peggy Johnson, who took the reins and has thus far positioned the company to solely target enterprise with its latest Magic Leap 2 headset.
Here’s the full spec sheet below:
CPU & GPU
NVIDIA® Parker SOC
CPU: 2 Denver 2.0 64-bit cores + 4 ARM Cortex A57 64-bit cores (2 A57’s and 1 Denver accessible to applications)
GPU: NVIDIA Pascal, 256 CUDA cores
Graphic APIs: OpenGL 4.5, Vulkan, OpenGL ES 3.1+AEP
RAM: 8 GB
Storage Capacity: 128 GB (actual available storage capacity 95 GB)
Power: Built-in rechargeable lithium-ion battery. Up to 3.5 hours continuous use. Battery life can vary based on use cases. Power level will be sustained when connected to an AC outlet. 45-watt USB-C Power Delivery (PD) charger
Audio Input: Voice (speech to text) + real world audio (ambient)
Audio Output: Onboard speakers and 3.5mm jack with audio spatialization processing
Connectivity: Bluetooth 4.2, WiFi 802.11ac/b/g/n, USB-C
Controller
Haptics: LRA Haptic Device
Tracking: 6DoF (position and orientation)
Touchpad: Touch sensitive
LEDs: 12-LED (RGB) ring with diffuser
Power: Built-in rechargeable lithium-ion battery. Up to 7.5 hours continuous use. 15-watt USB-C charger
Trigger Button: 8-bit resolution
Bumper Button: Digital
Home Button: Digital
Back before it ever had a product, the well-backed Magic Leap was the talk of the XR town thanks to its secrecy, occasional celebrity tech demos, and plenty of outlandish spin. All of that eventually produced the Magic Leap One, which didn’t exactly set the world on fire, especially as the device cost in excess of $2,000 USD when it launched in 2018. If you wanted one but couldn’t afford it, now’s your chance: Magic Leap seems to be selling them off cheap.
There’s a listing on Amazon-owned marketplace Woot for the first generation Magic Leap 1 – which was a slight improvement over the original Magic Leap One Creators Edition. It seems as though Magic Leap is selling off its old stock as the augmented reality (AR) headset still comes with a 1-year warranty and you can buy up to three at once!
But it’s the price that’s most surprising: you can pick up a brand new Magic Leap 1 for only $549 USD, a massive 76% saving off the listed $2,295.00. That’s the biggest saving gmw3 has seen on hardware, even if the device has been superseded by the newer Magic Leap 2.
Magic Leap 1 might have been a more enterprise-oriented headset – it wasn’t until a little later that Magic Leap announced it would fully focus on enterprise – but at the time it did court developers from across the XR industry. Studios like Resolution Games created exclusive titles like Glimt: The Vanishing at the Grand Starlight Hotel although, for the most part, those looking to tinker in AR will get the most use out of this deal.
The Magic Leap 1 comprises the headset and its array of sensors, an external puck that houses the battery and CPU, plus a controller. The holographic display has a field of view (FoV) of 50 degrees and there’s full 6DoF tracking support. Other features include a 120Hz refresh rate, 8GB RAM, 128GB storage, and 3.5 hours of battery life.
The $549 Magic Leap 1 deal will end in 8 days or sooner if the stock does run out before then. For continued updates on the latest XR deals, keep reading gmw3.
We had previously seen photos of Magic Leap 2, but this new video gives a full 360 degree overview. Plus, it gives a clearer look at the headset’s accompanying controllers. As reported earlier this month, the controllers feature cameras on the sides, used for onboard inside-out tracking.
We had seen some unofficial pictures of the controllers at the time, but this new video gives us our first official look. The two cameras are present on the sides, but you can also see what looks to be a trackpad on the top of the controller.
Controllers in almost all AR and VR systems available today are either tracked by the headset or rely on external base stations. Meta’s Quest 2 for example tracks a pattern of infrared LEDs underneath the plastic ring of its controllers, while Valve’s Index controllers determine their position relative to SteamVR “Lighthouse” base stations placed in the corner of your room.
Relying on the headset for tracking has a flaw: if the controller moves out of view of the sensors or if any part of your body gets in the way, tracking will temporarily break. This isn’t a problem for many use cases, but it does limit intricate two-handed interactions and scenarios like looking left while shooting right. Using external base stations can alleviate most of these issues, but that increases setup time and severely limits portability – and the path from controllers to base stations can still be occluded.
Magic Leap 1 and Pico Neo 2 used magnetic tracking. Unlike visible light, the magnetic field can pass through the human body so occlusion isn’t an issue. But magnetic tracking isn’t as precise as optical tracking systems can be, and adds significant weight and cost to the hardware.
Controllers with onboard cameras promise to solve the occlusion problem while maintaining high precision by tracking themselves in the same way inside-out headsets do – using a type of algorithm called Simultaneous Localization And Mapping (SLAM). SLAM essentially works by comparing the acceleration (from an accelerometer) and rotation (from a gyroscope) to how high-contrast features in your room are moving relative to the cameras. Initial SLAM algorithms were hand-crafted, but most today use at least some machine learning.
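The IMU-plus-camera fusion at the heart of this can be caricatured in a few lines. This is a heavily simplified toy, not a real SLAM system (which would use an extended Kalman filter or bundle adjustment over tracked features); the class and blend factor are purely illustrative:

```python
import numpy as np

class ToySlamFilter:
    """Toy sketch of the sensor fusion inside a SLAM tracker:
    integrate the IMU between camera frames (fast, but drifts),
    then blend in the camera's position fix when one arrives."""

    def __init__(self, blend=0.2):
        self.pos = np.zeros(3)
        self.vel = np.zeros(3)
        self.blend = blend  # how much to trust each visual fix

    def imu_step(self, accel, dt):
        # Dead reckoning: double-integrate acceleration (error grows fast).
        self.vel += accel * dt
        self.pos += self.vel * dt

    def camera_fix(self, observed_pos):
        # Visual update: pull the estimate toward the camera's solution.
        self.pos += self.blend * (observed_pos - self.pos)

# Hundreds of IMU steps typically run between each camera fix.
f = ToySlamFilter()
f.imu_step(np.array([1.0, 0.0, 0.0]), dt=1.0)
f.camera_fix(np.array([0.9, 0.0, 0.0]))
```

The division of labor is the important part: the IMU provides low-latency motion at high rates, while the (slower, heavier) visual pipeline keeps the drift bounded.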
The potential downsides of this approach are the cost of a chip powerful enough to run the tracking algorithm, the reduction in battery life due to the power that chip would draw, and the need to have a well-lit environment with high contrast features such as posters – though that limitation applies to inside-out headsets too. Some have suggested tracking quality may be reduced in fast movements due to motion blur, but this shouldn’t be any more of an issue than tracking fast moving LEDs – a global shutter sensor with a low exposure time should make this a non-issue.
Both Magic Leap 2 and Project Cambria are slated to release this year, though neither has a specific release window. They’re very different products – ML2 is a transparent AR headset designed for enterprise while Cambria is an opaque headset for VR and mixed reality – but whichever launches first will be the first AR or VR system to use this new approach to controller tracking.
A new photo of Magic Leap 2 appears to show the device’s controller equipped with cameras for inside-out tracking, which would be the first time we’ve seen the approach employed in a commercial XR headset.
A recent photo of Magic Leap 2 posted by Peter H. Diamandis is, as far as we know, the first time we’ve gotten a clear look at the front of the Magic Leap 2 controller. The photo clearly shows what appear to be two camera sensors on the controller, indicating a high likelihood it will have on-board inside-out tracking.
The original Magic Leap 1 also had a motion controller, but it used magnetic tracking. This was the reason for the curious square sticking out of the headset’s right side (it contained a receiver that sensed the magnetic field emitted from the controller).
Magic Leap 2 has ditched the square and appears to be moving to entirely on-board inside-out tracking for its controller. To our knowledge this will be the first time a commercial XR headset makes use of the approach; that is, assuming Magic Leap 2 beats Meta’s Project Cambria to market (the latter is also expected to use on-board inside-out tracking for its controllers, based on some leaked details, though it hasn’t been confirmed yet).
Other standalone XR headsets, like Quest 2, typically use headset-based inside-out tracking to track their controllers (or the user’s hands). That is: cameras on the headset look for the controllers (typically arrayed with IR LEDs) and use their location to map their movement relative to the headset.
While this approach has proven effective, it only works when the controllers are in view of the headset’s cameras. That means it’s possible for the headset to lose track of the controllers if they spend too much time outside of the camera’s view (like if you held them too low, too high, or behind your back). Putting cameras on the controllers themselves would enable inside-out tracking that, in theory, has coverage no matter where the user holds it.
Beyond ‘anywhere’ tracking coverage, putting inside-out tracking directly on the controller means the headset wouldn’t need to be covered in so many cameras (Quest 2 uses four cameras while Rift S tops that at five), and it means the elimination of the ‘tracking rings’ (seen on most standalone headset controllers) making them sleeker and perhaps less prone to breakage.
There are more potential benefits too. Giving controllers their own inside-out tracking could make it easier to use a combination of input methods, like a controller in one hand and controllerless hand-tracking in the other. Currently that’s a challenge because the cameras on the headset tend to use different exposure settings for tracking controllers compared to tracking hands. Furthermore, decoupling controller tracking from the headset has the potential to reduce controller tracking latency.
But, as they say, there is no free lunch. Giving controllers on-board inside-out tracking likely means putting a dedicated processor inside with enough power to crunch the incoming images and compute a position, which we’d expect to result in higher costs and more battery drain compared to having simple IR LEDs on-board.
There’s also the question of fast motion: controllers routinely get whipped around at high speed, and as you can imagine, swinging a camera around that fast is likely to induce motion blur across the image sensor, especially in darker scenes where the exposure time needs to be increased to gather more light.
Of course, Magic Leap 2 isn’t intended for gamers, so it’s possible that this wouldn’t be an issue for the headset’s enterprise-focused use-cases. Or maybe the image sensors on the controllers are fast enough to avoid motion blur even during quick movements?
Perhaps most likely, the sensors are nothing special and the controller will simply lean primarily on IMU-based tracking until it slows down enough for the cameras to get a clean position fix (the same approach existing standalone headsets use when controllers occasionally leave the headset’s field of view).
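The fallback described above can be sketched as a tiny dead-reckoning loop: integrate IMU acceleration between trusted camera fixes, then snap back to the optical position when one arrives. This is a hypothetical illustration only; real trackers fuse the two sources with a full sensor-fusion filter (e.g. an EKF) rather than hard resets.

```python
import numpy as np

class DeadReckoner:
    """Toy IMU dead reckoning between optical position fixes (illustrative sketch)."""

    def __init__(self, dt=0.001):  # assume a 1 kHz IMU sample rate
        self.dt = dt
        self.pos = np.zeros(3)
        self.vel = np.zeros(3)

    def optical_fix(self, position):
        """Snap to a trusted camera-based position and clear accumulated drift."""
        self.pos = np.asarray(position, dtype=float)
        self.vel[:] = 0.0  # simplification: a real filter would also fuse velocity

    def imu_step(self, accel_world):
        """Integrate one IMU sample (gravity already removed, world frame)."""
        self.vel += np.asarray(accel_world, dtype=float) * self.dt
        self.pos += self.vel * self.dt
        return self.pos
```

Double-integrating acceleration drifts quadratically with time, which is exactly why this only works as a short-term bridge until the cameras reacquire the controller.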
In any case, we’ll know more as Magic Leap 2 approaches commercial availability. The headset is expected to launch this year but a specific release date hasn’t been announced yet. In the tweet which included the controller photo, Peter H. Diamandis said he would be hosting Magic Leap CEO Peggy Johnson at his Abundance 360 event at the end of April which could signal the next time the company will divulge official details on the headset.
Editor’s note: Before someone in the comments says ‘SteamVR Tracking controllers have been doing inside-out tracking for years!’ — industry vernacular does not consider systems that rely on external artificial markers or beacons to be ‘inside-out tracking’.
At Photonics West, Magic Leap’s VP of Optical Engineering Kevin Curtis revealed some key specs for Magic Leap 2 (ML2).
The headset apparently weighs 248 grams, down from Magic Leap 1’s 316 grams.
However, Magic Leap headsets use a tethered compute box worn at your waist rather than housing the battery and processor in the headset itself. Curtis says ML2’s new compute box is more than twice as powerful as ML1’s, with “more memory and storage” too. While ML1 used an NVIDIA Tegra chip, Magic Leap announced a partnership with AMD in December.
ML1 has two variants to accommodate narrower and wider interpupillary distances (IPDs). Curtis claims ML2’s eyebox is twice as large, meaning this is no longer necessary. The eyebox is the region around the lens center, measured horizontally and vertically, within which your eyes can be positioned and still see an acceptable image.
While ML1 uniquely had two focal planes so near and far virtual objects were focused at different distances, there was no mention of the same technology in the ML2 spec presentation.
ML2 seems to have its own unique optical technology though: a new feature called Dynamic Dimming. A major problem with see-through AR headsets is the inability to display the color black, since their optical systems are additive – they superimpose color onto a transparent lens, but black is the absence of color. Curtis claims Dynamic Dimming can vary the lens from letting through 22% of real-world light down to just 0.3%. At 22%, the real world will be visible even in dark rooms, while at 0.3%, virtual objects would remain visible even in bright outdoor conditions.
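For a sense of scale, the quoted transmission figures imply roughly a 73x attenuation range, or a bit over six photographic stops of dimming. A quick back-of-envelope check:

```python
import math

# Transmission figures quoted by Curtis for Dynamic Dimming
T_MAX = 0.22   # maximum real-world light transmission (22%)
T_MIN = 0.003  # minimum transmission at full dimming (0.3%)

ratio = T_MAX / T_MIN     # attenuation range: ~73x
stops = math.log2(ratio)  # expressed in photographic stops: ~6.2
```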
ML1 had one eye tracking camera per eye, but ML2 has two per eye, which Curtis says “improves image quality, minimizes render errors,” and enables “segmented dimming.” The latter use case wasn’t elaborated on, but may suggest the headset could vary the Dynamic Dimming level based on whether you’re looking at darker or lighter virtual objects.
Notably, Curtis did not reveal the resolution or the exact field of view. But CEO Peggy Johnson revealed the field of view in November at Web Summit as approximately 70 degrees diagonal, up from 50 degrees in the original.
If we assume the aspect ratio shared in the October tease is accurate, that would mean a horizontal field of view of roughly 45 degrees and a vertical field of view of roughly 55 degrees. This is significantly narrower than opaque passthrough headsets like LYNX R1, but much taller than competing see-through headsets like HoloLens 2.
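The rough split above can be reproduced by treating the diagonal angle Pythagorean-style (a common back-of-envelope approximation; exact optics would work in tangent space). The ~45:55 horizontal-to-vertical aspect ratio is our assumption from the estimate above, not an official figure:

```python
import math

def split_diagonal_fov(diag_deg, aspect_h_over_v):
    """Approximate horizontal/vertical FoV from a diagonal FoV and aspect ratio,
    treating the angles as sides of a right triangle (rough approximation)."""
    v = diag_deg / math.sqrt(1.0 + aspect_h_over_v ** 2)
    h = v * aspect_h_over_v
    return h, v

# Assumed ~45:55 aspect ratio applied to the quoted ~70-degree diagonal
h, v = split_diagonal_fov(70.0, 45.0 / 55.0)  # -> roughly 44 x 54 degrees
```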
Magic Leap 1 is targeted toward enterprise but still available to individuals who want one. It’s unclear what sales path Magic Leap 2 will take, and no price or specific release date has yet been revealed.