I tried out an Apple Vision Pro. It frightened me | Arwa Mahdawi

The new ‘mixed-reality’ headset gave me a glimpse of the future – and I’m not sure it’s a future we should want

If you ever worry that technology might be getting a little too intelligent and robots are poised to take over the world, I have a quick and easy way to deflate those fears: call up a company and try to ask them a simple question. You will be put through to an automated voice system and spend the next 10 minutes yelling NO, I DIDN’T SAY THAT! WHAT DO YOU MEAN ‘YOU DIDN’T QUITE CATCH THAT?’ I DON’T WANT ANY OF THOSE OPTIONS! PUT ME THROUGH TO A HUMAN, GODDAMMIT!

That was certainly my experience calling up Apple and trying to reconfirm my Vision Pro demo, which had been abruptly cancelled due to snow. But if my phone experience felt ancient, the Apple Vision Pro headset itself felt like a startling glimpse of the future. As it should: the thing costs $3,499.

Arwa Mahdawi is a Guardian US columnist


Apple Vision Pro reviews roundup: stunning potential with big trade-offs

Early reviews of cutting-edge headset suggest it is packed with sci-fi tech and interesting ideas but is far from perfect

The first reviews of Apple’s Vision Pro headset, from publications with early access to the company’s attempt to create the next computing platform, talk of a big leap forward for face-mounted computers, for better or worse.

The US-only headset, first announced in June last year, aims to move “spatial computing” beyond the limited mixed-reality offered by rivals from Meta, Microsoft and others. It is packed with cutting-edge technology including 3D cameras on the front to capture videos, the ability to blend the real and virtual worlds with hand and eye tracking, plus a display on the front that shows a simulacrum of the wearer’s eyes.


Apple’s Vision Pro VR is incredible technology but is it useful?

The new product is far ahead of its competition; however, it is not clear that there is a pressing need for it or that most people can afford it

As people begin to report on their hands-on time with Apple’s Vision Pro VR headset, it’s becoming increasingly clear that the company has produced an incredible piece of hardware.

Even in limited demonstrations, users have praised the company’s extraordinary work producing the two postage-stamp-sized screens that sit in each eyepiece and pack in more pixels than a 4K TV; they’ve been stunned by the quality of the “passthrough” video, which shows wearers what’s happening in the outside world in enough detail that they can even use their phones while wearing the headset; and they’ve been impressed by the casual ease with which the gesture controls on the new hardware work, with an array of infrared cameras letting users make small and subtle hand movements to select and scroll rather than relying on bulky controllers.


TechScape: Is Apple’s $3,500 Vision Pro more than just another tech toy for the rich?

There’s a disconnect between the eye-watering price of Apple’s new ‘spatial computing’ gadget and the promise of it – but it has some genuinely novel features


Yesterday, Apple finally confirmed the worst-kept secret in Silicon Valley, and announced the Vision Pro, its $3,499 virtual reality headset. From our story:

The headset allows users to interact with “apps and experiences”, the Apple vice-president of human interface design, Alan Dye, said, in an augmented reality (AR) version of their own surroundings or in a fully immersive virtual reality (VR) space.

“Apple Vision Pro relies solely on your eyes, hands and voice,” Dye said. “You browse the system simply by looking. App icons come to life when you look at them; simply tap your fingers together to select, and gently flick to scroll.”

EyeSight, which sounded so ridiculous, could actually … work? A curved, outward-facing OLED screen displays the wearer’s eyes to the outside world, giving the impression of the headset as a simple piece of translucent glass. The screen mists over if the wearer is in a fully immersive VR space, while allowing people to have (simulated, at least) eye contact when in AR mode.

An array of downward and outward-pointing IR cameras let the headset keep track of your position and gestures at all times, allowing the company to build a controller-free experience without requiring the wearer to hold their hands in their eye-line when using the headset.

An AI-powered “persona” (don’t call it an avatar) stands in for you when you make a video call using the Vision Pro. It’s a photorealistic attempt to animate a real picture of you, using the data the headset captures of your eye, mouth and hand movements while you talk. Even in the staged demos, it looked slightly uncanny, but it seems a far smaller hurdle to introduce into the world than trying to encourage people to have business meetings with their Memoji.

Should VR headsets have a bulky battery mounted on your head, or should they rely on a tethered cable to a separate PC? Apple thinks there’s a third option: slip the bulky battery in your back pocket, and run the cable up to a lighter, more comfortable set of goggles. It could work. Or it could be the worst of both worlds: a cable that still inhibits movement and comfort, with none of the power of a real tethered VR system. Hey, not all novelty is a slam-dunk.


TechScape: Will Apple’s new VR headset be the one to finally catch on?

In this week’s newsletter: The $3,000 product could be the next Apple gamechanger – or just another cool toy for those who can afford it

Next Monday will see Apple’s worldwide developers conference kick off, and with it one of the company’s two most important annual press events.

Typically, the keynote at WWDC (or “dub dub”) is a software-focused affair, previewing the next versions of iOS, macOS and so on for an audience of developers who need to get to grips with the updates before their launch in the autumn. It’s balanced out by the hardware-focused events oriented around each year’s iPhone launch, since Apple still likes to play the game of announcing and shipping its top-tier products in short order.

A tethered battery pack, designed to sit in the user’s back pocket, to ease the tradeoff between power and performance on the one hand and weight and comfort on the other.

A screen on the front of the headset, designed solely to show the user’s expressions to the outside world, with the goal of making it more comfortable to interact with people wearing the device.

A focus on “passthrough” use, where a camera on the front of the screen shows the outside world to the wearer, with apps and features superimposed on top.

And, most importantly of all, a price tag of about $3,000.


Cyberpunk + XR

Cyberpunk is a science fiction subgenre set in dystopian futures that tends to focus on a “combination of lowlife and high tech”, featuring futuristic technological and scientific achievements, such as artificial intelligence and cybernetics, juxtaposed with societal collapse, dystopia or decay (per Wikipedia). Cyberpunk also often features advanced demonstrations and uses of XR technologies: Augmented Reality, Virtual Reality and Mixed Reality.

In previous blog posts, we’ve mentioned KBZ, which has created lists of Augmented Reality and Virtual Reality films, Artificial Intelligence films, Hard Sci-Fi films, Multiverse films and Technology films (often also featuring AR, VR and MR). KBZ has two new articles that look at various Cyberpunk films – the Top 20 Best Cyberpunk Films and the Top Cyberpunk Films You Haven’t Seen.

The Best Cyberpunk Films article is worth a read if you’re looking for some of the best AR, VR and MR films to watch. There’s quite a bit of crossover between Cyberpunk and XR tech and the article lists some of the best films like Ready Player One, Blade Runner 2049, Total Recall, The Matrix and others.

What we found more interesting was the Top Cyberpunk Films You Haven’t Seen article, as it includes some new and obscure AR & VR films we haven’t seen yet. There’s a recent film called Karmalink that has some advanced AR concepts, a film called Hardwired that features AR advertising and display concepts (via a brain implant), and Terminal Justice, which features old-school VR HMDs, a virtual reality crime scene and AR eye implants for infrared and night-time vision. There were also two additional films we would recommend every AR & VR fan check out: Virtual Nightmare and Natural City. Virtual Nightmare uses VR similar to The Matrix, while Natural City is similar to Blade Runner and shows many advanced AR concepts integrated into a futuristic society.

KBZ also has a short video that highlights some of these films (with Cyberpunk AR, VR and MR concepts) and you can watch the video below.

The post Cyberpunk + XR appeared first on Zugara.

The next Tamagotchi? Meet Peridot, the AR pet app from the makers of Pokémon Go

While kids will love throwing themselves into caring for their new virtual pet, older players looking for a next-gen AR-led Pokémon Go may be disappointed

From the unlikely return of Gladiators to the resurgence of the layered blowout hairstyle beloved of Rachel from Friends, 90s nostalgia is in rude health. It was only a matter of time, then, until we witnessed the return of the era’s most baffling toy – the Tamagotchi.

Created by Akihiro Yokoi and Aki Maita in 1996, these keychain-sized gaming devices became an instant playground phenomenon, seeing millions of children neglect their real-life pets in favour of cleaning pixelated poop. Then, just as quickly as they arrived, these pocket playthings disappeared. While Nintendo channelled the Tamagotchi spirit into the hugely successful Nintendogs series, the rise of increasingly complex life sims, such as … well, The Sims, saw the pet and play genre die an untimely death – until now.


AR, VR, AI, Multiverse & More

We’ve referenced the KBZ Films site before, as they’ve posted lists of lesser-known Augmented Reality & Virtual Reality films. Recently, KBZ posted an article on the Best AR & VR Films that lists some great films (several of which we’ve highlighted in the past on our blog here). The article’s Top 20 rankings include some great AR & VR films such as Avalon (VR), Auggie (AR), Brainstorm (VR), Sleep Dealer (MR) and Anon (AR). It’s worth checking the article out if you’re new to the XR field and want to see a list of the Best AR & VR films. KBZ also has a list of every imaginable film about AR, VR & MR that you can find here.

The KBZ site also has some other interesting articles to check out including a list of some great films about Artificial Intelligence (AI) and another article that highlights some obscure films about the Multiverse and alternate realities. The AI list of films is especially relevant as AI technology is also usually found in AR & VR films. The film Auggie comes to mind as the AR projection through eyewear is based on AI models of the person’s subconscious. Below are videos of some of the AI and Multiverse films referenced in the articles.

Finally, if you’re into Sci-Fi, there’s also some other great articles from the site including a list of the best time travel films and best time loop films. We’ve always found those films interesting as more recent time travel films have also included aspects of XR technologies.

The post AR, VR, AI, Multiverse & More appeared first on Zugara.

Meta Shows New Progress on Key Tech for Making AR Genuinely Useful

Meta has introduced the Segment Anything Model, which aims to set a new bar for computer-vision-based ‘object segmentation’—the ability for computers to understand the difference between individual objects in an image or video. Segmentation will be key for making AR genuinely useful by enabling a comprehensive understanding of the world around the user.

Object segmentation is the process of identifying and separating objects in an image or video. With the help of AI, this process can be automated, making it possible to identify and isolate objects in real-time. This technology will be critical for creating a more useful AR experience by giving the system an awareness of various objects in the world around the user.
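To make the idea concrete, here is a minimal, deliberately simplified sketch of segmentation on a binary grid: connected regions of foreground pixels are labelled as separate objects. Real systems like SAM use learned models on full-colour images, not this toy connectivity rule, and all names here are our own illustration.

```python
from collections import deque

def segment(grid):
    """Label connected regions of 1s in a binary grid (4-connectivity).

    Returns (labels, count) where labels[r][c] is 0 for background
    or 1..count for each separate object. A toy stand-in for the
    real-time segmentation described above.
    """
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                count += 1
                queue = deque([(r, c)])
                labels[r][c] = count
                while queue:  # breadth-first flood over the region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count
```

Run on a tiny frame with two blobs, this reports two distinct objects, each with its own label.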

The Challenge

Imagine, for instance, that you’re wearing a pair of AR glasses and you’d like to have two floating virtual monitors on the left and right of your real monitor. Unless you’re going to manually tell the system where your real monitor is, it must be able to understand what a monitor looks like so that when it sees your monitor it can place the virtual monitors accordingly.

But monitors come in all shapes, sizes, and colors. Sometimes reflections or occluded objects make it even harder for a computer-vision system to recognize.

Having a fast and reliable segmentation system that can identify each object in the room around you (like your monitor) will be key to unlocking tons of AR use-cases so the tech can be genuinely useful.
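Once a segmentation system has found the real monitor, the placement step itself is simple geometry. The sketch below assumes the detector returns an axis-aligned bounding box; the helper name and the flat 2D layout are our own simplification (a real headset runtime would also handle depth, orientation, and occlusion).

```python
def place_virtual_monitors(real_box, gap=0.1):
    """Given a real monitor's bounding box (x, y, width, height) in
    metres, return boxes for two same-sized virtual monitors placed
    to its left and right, separated by `gap` metres.

    Hypothetical helper for illustration only.
    """
    x, y, w, h = real_box
    left = (x - gap - w, y, w, h)    # shifted left by one width plus the gap
    right = (x + w + gap, y, w, h)   # shifted right past the real monitor
    return left, right
```

For a 0.6m-wide monitor at the origin, the virtual screens land symmetrically 0.7m to either side.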

Computer-vision-based object segmentation has been an ongoing area of research for many years now, but one of the key issues is that in order to help computers understand what they’re looking at, you need to train an AI model by giving it lots of images to learn from.

Such models can be quite effective at identifying the objects they were trained on, but they will struggle with objects they haven’t seen before. That means that one of the biggest challenges for object segmentation is simply having a large enough set of images for the systems to learn from, but collecting those images and annotating them in a way that makes them useful for training is no small task.

SAM I Am

Meta recently published work on a new project called the Segment Anything Model (SAM). It’s both a segmentation model and a massive set of training images the company is releasing for others to build upon.

The project aims to reduce the need for task-specific modeling expertise. SAM is a general segmentation model that can identify any object in any image or video, even for objects and image types that it didn’t see during training.

SAM allows for both automatic and interactive segmentation, identifying individual objects in a scene from simple inputs. SAM can be ‘prompted’ with clicks, boxes, and other prompts, giving users control over what the system is attempting to identify at any given moment.
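The flavour of a point prompt can be shown with a toy flood-fill: start from the ‘clicked’ pixel and grow a mask over similar neighbours. This is only an illustration of the interaction model, not how SAM works — SAM uses a learned model rather than intensity similarity.

```python
def segment_from_point(image, seed, tol=10):
    """Toy point-prompted segmentation: flood-fill outward from a
    'clicked' pixel, collecting 4-connected neighbours whose
    intensity is within `tol` of the seed pixel.

    Returns a boolean mask the same shape as `image`.
    """
    rows, cols = len(image), len(image[0])
    sy, sx = seed
    target = image[sy][sx]
    mask = [[False] * cols for _ in range(rows)]
    mask[sy][sx] = True
    stack = [(sy, sx)]
    while stack:
        y, x = stack.pop()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < rows and 0 <= nx < cols and not mask[ny][nx]
                    and abs(image[ny][nx] - target) <= tol):
                mask[ny][nx] = True
                stack.append((ny, nx))
    return mask
```

Clicking on a bright region selects just that region, leaving the darker background unmasked — the same one-click selection loop that eye-tracking could drive on a headset.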

It’s easy to see how this point-based prompting could work great if coupled with eye-tracking on an AR headset. In fact, that’s exactly one of the use-cases that Meta has demonstrated with the system:

Here’s another example of SAM being used on first-person video captured by Meta’s Project Aria glasses:

You can try SAM for yourself in your browser right now.

How SAM Knows So Much

Part of SAM’s impressive ability comes from its training data, which contains a massive 11 million images and 1.1 billion identified object shapes. It’s far more comprehensive than contemporary datasets, according to Meta, giving SAM much more experience in the learning process and enabling it to segment a broad range of objects.

Image courtesy Meta

Meta calls the SAM dataset SA-1B, and the company is releasing the entire set for other researchers to build upon.

Meta hopes this work on promptable segmentation, and the release of this massive training dataset, will accelerate research into image and video understanding. The company expects the SAM model can be used as a component in larger systems, enabling versatile applications in areas like AR, content creation, scientific domains, and general AI systems.

Qualcomm Partners with 7 Major Telecoms to Advance Smartphone-tethered AR Glasses

Qualcomm announced at Mobile World Congress (MWC) today that it’s partnering with seven global telecommunication companies in preparation for the next generation of AR glasses, which are set to work directly with the user’s smartphone.

Partners include CMCC, Deutsche Telekom, KDDI Corporation, NTT QONOQ, T-Mobile, Telefonica, and Vodafone, which are said to currently be working with Qualcomm on new XR devices, experiences, and developer initiatives, including Qualcomm’s Snapdragon Spaces XR developer platform.

Qualcomm announced Snapdragon Spaces in late 2021, a software toolkit focused on performance and low-power devices that lets developers create head-worn AR experiences from the ground up or add head-worn AR to existing smartphone apps.

Qualcomm and Japan’s KDDI Corporation also announced a multi-year collaboration, which they say will focus on expanding XR use cases and creating a developer program in Japan.

Meanwhile, Qualcomm says OEMs are designing “a new wave of devices for operators and beyond” such as the newly unveiled Xiaomi Wireless AR Glass Discovery Edition, OPPO’s new Mixed Reality device and OnePlus 11 5G smartphone.

At least in Xiaomi’s case, the Wireless AR Glass headset streams data from compatible smartphones, effectively offloading computation to the phone. The company’s 126g headset boasts a wireless latency as low as 3ms between the smartphone and the glasses, and a full-link latency as low as 50ms, which is comparable to wired solutions.