Apple Reveals USDZ – A New AR File Format Made With Pixar, Adobe Bringing Support To Creative Cloud

iOS 12 made its official debut at Apple’s WWDC 2018 event and brought quite a lot of new toys with it. The keynote kicked off with a look at how the tech giant is continuing its focus on augmented reality (AR). It was almost exactly a year ago that Apple revealed its AR developer platform ARKit, and now, as then, Apple’s Craig Federighi took to the stage to talk attendees through not just the rumoured ARKit 2, as reported earlier on VRFocus, but other developments as well.

The first announcement was a brand-new file format specifically for working with AR, developed with companies such as Pixar – “some of the greatest minds in 3D.”

“AR is transformational technology,” Federighi told the audience at San Jose’s McEnery Convention Center. “Bringing experiences into the real world? It enables all kinds of new experiences, changing the way we have fun and the way we work. In iOS 12 we wanted to make an easy way to experience AR across the system.”

The new file format is called USDZ (or Universal Scene Description), and it focuses on sharing content: files will be viewable everywhere from built-in file views to the Safari web browser to email, letting you place 3D models into the real world. “It’s something like ‘AR Quick Look’,” Federighi explained.
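Inside an app, viewing a USDZ model is meant to go through the same Quick Look machinery. Below is a minimal sketch of what that could look like, assuming iOS 12’s QuickLook framework and a hypothetical bundled file named toy_robot.usdz; it is an illustration, not Apple’s sample code.

```swift
import UIKit
import QuickLook

// A minimal sketch (assuming iOS 12's QuickLook support for USDZ): present a
// bundled .usdz model in the system viewer. "toy_robot.usdz" is hypothetical.
class ModelViewerController: UIViewController, QLPreviewControllerDataSource {

    func showModel() {
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true, completion: nil)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int {
        return 1
    }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // A file URL acts as the preview item; on device the .usdz opens in the
        // AR Quick Look viewer with an object/AR placement toggle.
        return Bundle.main.url(forResource: "toy_robot", withExtension: "usdz")! as NSURL
    }
}
```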

Apple confirmed that it would be working with leading companies in 3D tools and libraries to bring USDZ support to their services, with Allegorithmic (developer of Substance), PTC, TurboSquid, Adobe, Autodesk, Sketchfab and Quixel all namechecked on stage.

“At Adobe we believe that augmented reality is an incredibly important technology. And with ARKit, Apple is by far the most powerful platform for AR,” added Adobe‘s Executive Vice President and CTO Abhay Parasnis, who appeared on stage to explain more about the company’s work on USDZ – something he described as “a pretty big deal” – and to confirm that USDZ support would be coming to Adobe’s Creative Cloud suite of applications and services.

“With Creative Cloud designers and developers will now be able to use familiar apps – apps that they know and love, like Photoshop or Dimension – to create amazing AR content, and bring it easily via USDZ.”

Parasnis also confirmed a new Adobe creative app for iOS, built specifically for designing AR content, that will enable developers to bring anything from text to images and video from Creative Cloud directly into a WYSIWYG AR editing environment.

VRFocus will bring you more news on the AR developments from WWDC shortly.

Apple Could Unveil ARKit 2.0 at WWDC With a Focus on Multiplayer

The augmented reality (AR) war between Apple’s ARKit and Google’s ARCore certainly seems to be hotting up if rumours about today’s WWDC 2018 conference are correct. With Google holding its I/O conference last month, this week it’s the turn of Apple, with not only iOS 12 expected to be announced but also a new version of ARKit.


As reported by Reuters, Apple looks set to add official cross-device gameplay functionality to what will be ARKit 2.0. The basic premise is that two people will be able to view the same digital object, interacting on their own devices while sharing the same game world. Google has already come up with its own version, Cloud Anchors, for multiplayer-style AR experiences.

Regarding the technology, there is concern about privacy – which big tech companies already know is a dicey area. To work, AR software needs to scan your surroundings to get a sense of the location and find suitable flat surfaces for placing digital objects. Reuters reports that Google will store this scan data in the cloud, while ARKit will use a two-player, phone-to-phone system, which should mean players’ data is more secure and not stored in the cloud.
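For a rough idea of what a phone-to-phone approach could look like in code, here is a minimal sketch built on the ARWorldMap type that iOS 12’s ARKit exposes; the MultipeerConnectivity session setup is assumed and omitted, and nothing here is confirmed as Apple’s own implementation.

```swift
import ARKit
import MultipeerConnectivity

// A minimal sketch of phone-to-phone world sharing (iOS 12-era ARKit assumed):
// the host serializes its ARWorldMap and sends it directly to a peer, which
// then relocalizes against it. No cloud service is involved.
func sendWorldMap(from session: ARSession, over mcSession: MCSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap else { return }
        // Archive the map so it can travel over the local peer-to-peer link.
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
        }
    }
}

func receiveWorldMap(_ data: Data, into session: ARSession) {
    guard let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                            from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map   // relocalize into the shared coordinate space
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```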

Currently no other details have surfaced regarding Apple’s ARKit plans, but expect to hear about them a little later today once the WWDC 2018 keynote gets underway. CEO Tim Cook is focusing Apple’s R&D on AR rather than virtual reality (VR), having previously described AR as “big and profound.”

Of course, alongside ARKit 2.0 many are expecting the first concrete news about iOS 12 and the new features that come with it. These will likely include further AR integration – more Animoji for Apple’s iPhone X, for example. If any of this does occur, you can be sure VRFocus will let you know and keep you up to date.

Apple betting on Windows-based AR

Well, well, well, look who wants to join and play! Apple’s developer conference WWDC has just started and it’s finally time to see some in-house augmented reality development from Apple hit the stage! I didn’t even have time to sort all my AWE conference notes, check all the videos or talk about Ori’s keynote about pushing superheroes out into the world. Hm, guess I can’t resist, but I need to write up the Apple news today:

One more thing… AR

So, Apple talks about their own big-brother speaker for your living room, some other hardware, iOS updates and so on, but then we finally get to learn about Apple’s plans to jump into AR! Pokémon serves as the well-known example for the masses again, but this time using the new “ARKit” by Apple: their new SDK toolset for developers that brings AR…

to your phone or tablet. Yep. No AR goggles (yet), but a frame, a window to hold. As I discussed last week, this was highly expected. Apple AR will be seen through windows for the next few years, too. Apple won’t spearhead the glasses approach.

The presentation of this new toolkit is nicely done and it feels as if AR has never been seen before. Craig Federighi is really excited – “you guys are actually in the shot here” – so excited that one could think people at Apple were only thinking about VR lately and are surprised to see a camera feed in the same scene. He claims that so many fake videos have been around and now Apple is finally showing “something for real”. (Nice chat, but honestly, there have been others before. But, let’s focus.) Obviously Apple is good at marketing and knows their tech well. They have been investing in this a lot and now we can see the first public piece: in the demo we see how the RGB camera of the tablet finds the plain wooden surface of the table and how the presenter can easily add a coffee cup, a vase or a lamp to it. The objects are nicely rendered (as expected in 2017) and have fun little details like steam rising from the coffee. The demo shown is a developer demo snippet and shows how to move the objects around – and how they influence each other regarding lighting and shadows. The lamp causes the cup to cast a shadow on the real table, and the shadows update accordingly as the objects move. In the demo section one could try it out and get a closer look – I’ve edited the short clip below to summarize this. Next, we see a pretty awesome Unreal-rendered “Wingnut AR” demo showing some gaming content in AR on the table. Let’s take a look now:

The demos show pretty stable tracking (under the prepared demo conditions); Apple states that the mobile sensors (gyro, etc.) support the visual software doing the heavy lifting with the RGB camera. They talk about “fast stable motion tracking”, and from what was shown this can be given a “thumbs up”. The starting point seems to be plane estimation, registering a surface to place objects on. They don’t talk about the “basic boundaries” in detail – how is a surface registered? Does it have clear borders? In the Unreal demo we briefly see a character fall off the scenery into darkness, but maybe this only works in the prepared demo context. Would it work at home? Can the system register more than one surface? Or is it (today) limited to augmenting things at a single height level? We don’t learn about this, and the demo (I would have done the same) avoids these questions. But let’s find out about this later below.

Apple seems pretty happy about the real-time light calculation that gives everything a more realistic look. They talk about “ambient light estimation”, but in the demo we only see the shadows of the cup and vase moving in reference to the (also virtual) lamp. This is out-of-the-box functionality of any 3D graphics engine. But it seems they plan far bigger things, actually considering real-world light, hue, white balance and other details to better integrate AR objects. Metaio (now part of Apple and probably leading this development) showed some of these concepts during their 2014 conference in Munich (see my video from back then), using the secondary, user-facing camera to estimate the real-world lighting situation. I would have been more pleased if Apple had shown some more on this, too. After all, it’s the developer conference, not the consumer marketing event. Why didn’t they switch off the lights, or use a changing spotlight with some real reference object on the table?

Federighi briefly talks about scale estimation, support for Unity, Unreal and SceneKit for rendering, and that developers will get Xcode app templates to get started quickly. With so many existing iOS devices out in the market, they claim to have become “the largest AR platform in the world” overnight. I don’t know the numbers, but agreed: the phone will stay the AR platform of choice for everybody (i.e. the big consumer market) for now. No doubt about that. But also no innovation from Apple seen today.

The Unreal Engine demo afterwards shows some more detail on tracking stability (going closer, moving faster) and how good the rendering quality and performance can be. No real interaction concept is shown, though – what is the advantage? Also, the presentation felt a bit uninspired – reading from the teleprompter in a monotone voice. Let’s get more excited, shall we? Or won’t we? Maybe we are not so excited because it has all been seen before? Even the fun Lego demo reminds us of the really cool Lego Digital Box by Metaio.

A look at the ARKit toolkit

The toolkit’s documentation is now also available online, so I planned to spend hours there last night. Admittedly, it’s quite slim as of today, but it gives a good initial overview for developers. We learn a thing or two:

First, multiple planes are possible. The world detection might be (today) more limited than on a Tango or HoloLens device, but the system focuses on close-to-horizontal surfaces. The documentation says “If you enable horizontal plane detection […] notifies you […] whenever its analysis of captured video images detects an area that appears to be a flat surface” and mentions “orientations of a detected plane with respect to gravity”. Further, it seems that surfaces are treated as rectangular areas, since “the estimated width and length of the detected plane” can be read as attributes.
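Put into code, the horizontal-plane behaviour quoted above looks roughly like this – a sketch against the ARKit API as it eventually shipped in iOS 11 (class names shifted slightly between betas), not a snippet from Apple’s docs:

```swift
import ARKit

// A minimal sketch: enable horizontal plane detection and log the estimated
// size of every plane ARKit reports (iOS 11-era API assumed).
class PlaneWatcher: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal   // only horizontal planes in this first release
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // extent.x / extent.z are the estimated width and length of the plane
            print("Detected plane, approx. \(plane.extent.x) x \(plane.extent.z) metres")
        }
    }
}
```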

Second, the lighting estimation seems to include only one value to use: “var ambientIntensity: CGFloat”, which returns the estimated intensity, in lumens, of the ambient light throughout the currently recognized scene. No light direction for cast shadows or other info so far, but obviously a solid start towards better integration.
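A sketch of consuming that single value on the rendering side might look like the following; the ARSCNView wiring and the division by 1000 (Apple documents roughly 1000 lumens as “neutral” lighting) are my assumptions, not something shown at WWDC:

```swift
import ARKit
import SceneKit

// A minimal sketch: feed ARKit's single ambient light estimate into SceneKit's
// lighting environment each frame (iOS 11-era API assumed).
class LightingUpdater: NSObject, ARSessionDelegate {
    let sceneView: ARSCNView

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
        super.init()
        sceneView.session.delegate = self
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let estimate = frame.lightEstimate else { return }
        // ambientIntensity is reported in lumens; ~1000 lm reads as a neutral,
        // well-lit scene, so use it as a relative multiplier.
        sceneView.scene.lightingEnvironment.intensity = estimate.ambientIntensity / 1000.0
    }
}
```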

They don’t talk about other aspects of world recognition. For example, there is no reconstruction listed that would allow assumed geometry to be used for occlusions. But, well, let’s hit F5 in our browsers over the next weeks to see what’s coming.

AR in the fall?

Speaking of what’s next: what is next? Apple made a move that, to me, was overdue. I don’t want to ruin it for third-party developers creating great AR toolkits, but it was inevitable. While a third-party SDK has the huge advantage of taking care of cross-platform support, it is obvious that companies like Apple or Google want to squeeze the best out of their devices by coding better low-level features into their systems (like ARKit or Tango). The announcement during WWDC felt more like “ah, yeah, finally! Now, please, can we play with it until you release something worthy of it in the fall?” Maybe we will see the iPhone 8 shipping a tri-cam setup like Tango – or is the twin-camera setup enough for more world scanning?

I definitely want to see more possibilities to include the real world, be it lighting conditions, reflections, or object recognition and room awareness (for walls, floors and mobile objects). AR is just more fun and useful if you really integrate it into your world and allow easier interaction. Real interaction, not only walking around a hologram. The Unreal demo was surely only there to show off rendering capabilities, but what do I do with it? Where is the advantage over a VR game (possibly with added positional tracking for my device)? AR only wins if it plays to its advantage: seamlessly integrating into our lives, our real-world view and our current situation, and enabling natural interaction.

Guess now it’s wait and see (and code and develop) with the SDK until we see some consumer update in November. This week it was a geeky developer event, but we can only tell whether it all prevails once it hits the stores for consumers. The race is on. While Microsoft claims the phone will be dead soon (but doesn’t show a consumer alternative just yet), Google sure could step up and push some more Tango devices out there to take the lead over the summer. So, let’s enjoy the sunny days!


Apple’s ARKit is Bringing Augmented Reality to “hundreds of millions of iPhones and iPads”

Apple’s annual Worldwide Developer Conference (WWDC) is here, and today’s keynote saw a number of VR-specific announcements including Apple’s first VR-ready computers to go along with the launch of the company’s newest macOS High Sierra. While the company is finally going ‘VR-native’ for desktop, Apple is also zeroing in on augmented reality for iOS 11 with the entrance of their newly revealed app developer kit ‘ARKit’.

Possibly taking a swipe at Facebook’s latest AR demo at F8 in April, Senior VP of software engineering Craig Federighi said: “We’ve all seen a lot of carefully edited vision videos on this topic recently, but in this case, I’d like to show you something for real.”

Starting up a test application that will be made available to developers, Federighi explains that with the iPhone’s computer vision capabilities it’s able to map surfaces and add digital objects—replete with interactive animations and dynamic lighting. Adding a steaming coffee cup, a lamp and a vase to a bare, marker-less table, the tracking proves to be relatively solid.
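For a rough sense of how a developer would recreate that placement step, here is a minimal sketch against iOS 11’s ARKit/SceneKit APIs; the ARSCNView is assumed to already be running a world-tracking session, and the cylinder standing in for a coffee cup is purely illustrative.

```swift
import UIKit
import ARKit
import SceneKit

// A minimal sketch: hit-test a screen tap against detected planes and drop a
// simple virtual object where the ray meets the real surface.
func placeObject(at screenPoint: CGPoint, in sceneView: ARSCNView) {
    let results = sceneView.hitTest(screenPoint, types: .existingPlaneUsingExtent)
    guard let hit = results.first else { return }

    // A stand-in "coffee cup": a small white cylinder.
    let cup = SCNNode(geometry: SCNCylinder(radius: 0.04, height: 0.08))
    cup.geometry?.firstMaterial?.diffuse.contents = UIColor.white

    // Position the node at the hit point on the detected plane.
    let t = hit.worldTransform
    cup.position = SCNVector3(t.columns.3.x, t.columns.3.y + 0.04, t.columns.3.z)
    sceneView.scene.rootNode.addChildNode(cup)
}
```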

Federighi says that ARKit provides fast and stable motion tracking, plane estimation with basic boundaries, ambient lighting estimation, scale estimation, support for Unity, Unreal and SceneKit, and Xcode app templates—all available on “hundreds of millions of iPhones and iPads […] making overnight ARKit the largest AR platform in the world.”

Apple says iOS 11 will be made available to iPhone 5s and later, all iPad Air and iPad Pro models, iPad 5th generation, iPad mini 2 and later, and iPod touch 6th generation. iOS 11 will be released this fall, likely in tandem with iPhone 8 and iPhone 7S smartphones. A public beta is coming in June.


Apple is working with third-parties such as IKEA, Lego, and Niantic to use ARKit, with Apple showing an improved Pokémon Go on stage that looks to actually utilize augmented reality to bring the game to life. Because ARKit uses computer vision that relies on the device’s onboard sensors and CPU/GPU, no external equipment is required to run these sorts of AR experiences.

The keynote also revealed a new AR-focused company from critically-acclaimed director and FX guru Peter Jackson called ‘Wingnut AR’. A special demo showed off the graphical and camera-based tracking capabilities of Apple’s hardware featuring a complex, real-time rendered scene digitally placed on a tabletop using an iPad. Wingnut AR is bringing an AR experience to the App Store later this year. Check out the video below to see Wingnut AR’s special Apple demo.

 
