Inglobe supports the Advanced School in Artificial Intelligence

Artificial Intelligence (AI) is today one of the hottest technology topics across business communities, thanks to growing public attention to technical achievements in the field and ongoing investments by both governments and tech giants. AI already dominates the debate around innovation and digital transformation, and it is expected to have an even larger impact on many sectors, including industry, finance, healthcare, education and safety.

The way we learn and work will change drastically in the near future, which means we need to rethink our educational system and prepare for the challenges ahead by planning investments and fostering research on topics such as Machine Learning, Perceptual Computing, Cognitive Robotics and Language Understanding.

The AI market is worth about 5 billion dollars and is expected to grow tenfold within the next five years, a truly impressive growth rate!
Some current major uses of artificial intelligence include image recognition, object identification, detection and classification, as well as automated geophysical feature detection. The largest proportion of revenue comes from the AI for enterprise applications market. (Source: Statista)

Italy is the fifth-largest producer of most-cited scientific papers on machine learning, after the United States, China, India and the United Kingdom (Source: OECD).
This means that, despite the lack of a central national strategy on AI like the one the French Government is adopting, Italy is among the most advanced countries in terms of research and development, with many researchers and organizations working on AI-related topics.

The Institute of Cognitive Sciences and Technologies (ISTC-CNR), a leading Italian research center on AI, in collaboration with other organizations such as the Italian Association for Artificial Intelligence (AI*IA) and the Italian Association for Machine Learning, has recently designed an educational program called “Advanced School in Artificial Intelligence” (AS-AI) to help post-graduates and professionals meet the job market’s fast-growing demand for new skills and competencies in advanced fields of Artificial Intelligence.

Inglobe Technologies is proud to support this initiative by sponsoring the School and by offering the course “Visual Perception and Spatial Computing”, which will be taught by its Chief Research Officer, Luigi Freda.
This course will provide an introduction to visual SLAM and real-time techniques, focusing on how to:

  • robustly localize a camera system with respect to the environment
  • compute a dense 3D reconstruction of the surrounding scene
  • segment the resulting 3D model using both geometry and semantics
  • use deep learning to enable advanced scene understanding and improve SLAM performance

The course will also present emerging spatial AI techniques, which have many potential applications, including mixed reality, virtual reality and cognitive robotics.
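
To give a flavour of the first bullet above, here is a minimal Python sketch of the classic two-view step at the heart of visual SLAM: matching features between consecutive camera frames and recovering the relative pose from the essential matrix, with RANSAC providing robustness to bad matches. The use of OpenCV, and all function and variable names, are illustrative assumptions rather than course material.

```python
import cv2
import numpy as np

def estimate_relative_pose(frame_prev, frame_curr, K):
    """Toy two-view localization step: recover the relative rotation R and
    (unit-scale) translation t of the camera between two frames.
    K is the assumed 3x3 pinhole intrinsic matrix."""
    orb = cv2.ORB_create(2000)                        # detect binary ORB features
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_curr, None)

    # Brute-force Hamming matching suits binary descriptors like ORB
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier correspondences, which is where the robustness comes in
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
    return R, t
```

A full SLAM system wraps this step in tracking, keyframe selection, mapping and loop closure, which is where the dense reconstruction and semantic segmentation topics listed above come into play.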

During and after the courses, participants enrolled in the Advanced School in AI will take part in an internship program with the partner organizations and companies of the program, including Inglobe Technologies.

AS-AI is a great opportunity for participants to start building a valuable professional network while strengthening their AI skills and competencies. To enroll in the program, please consult the enrollment procedure and the requirements applicants must meet to join the Advanced School in AI → https://as-ai.org/apply/

    ABI Research Provides New Report On Eight Technologies That Will Transform Manufacturing

ABI Research, a market-foresight advisory firm that provides strategic guidance on compelling transformative technologies, has released a report that outlines how these technologies fit together in Smart Manufacturing and identifies vendor challenges and solutions within the sector. The eight areas the report covers are additive manufacturing, artificial intelligence (AI) and machine learning (ML), augmented reality (AR), blockchain, digital twins, edge intelligence, Industrial Internet of Things (IIoT) platforms, and robotics.


Within the manufacturing sector there has already been increased adoption of IIoT platforms and edge intelligence. Over the next ten years, manufacturers are expected to start piecing together the other new technologies, eventually leading to more dynamic factories that are less dependent on fixed assembly lines and immobile assets.

    “Manufacturers want technologies they can implement now without disrupting their operations,” says Pierce Owen, Principal Analyst at ABI Research. “They will change the way their employees perform jobs with technology if it will make them more productive, but they have no desire to rip out their entire infrastructure to try something new. This means technologies that can leverage existing equipment and infrastructure, such as edge intelligence, have the most immediate opportunity.”

With a transition towards a lights-out factory already in motion, the major disruption will require an overhaul of the workforce, IT architecture, physical facilities and equipment, and the full integration of a number of new technologies, including connectivity, additive manufacturing, drones, mobile collaborative robotics, IIoT platforms and AI, according to the report.

The report also notes that the above technologies have already started to converge, and that robotics provides a physical representation of this convergence: robots use AI and computer vision, and connect to IIoT platforms where their digital twins are located. This connectivity, along with AI, will increase in importance as more robots and technology join the assembly line and work alongside humans and each other.

ABI Research’s full report, titled Smart Manufacturing Transformative Horizon, is part of the company’s Smart Manufacturing research service and is available to read in full. Back in May, ABI Research released a report predicting that AR will struggle in the brick-and-mortar retail environment.

For more on ABI Research in the future, keep reading VRFocus.

    Reconstruct Your Favourite Movie in AR

    Have you ever dreamed of living out your favourite movie scene? Want to be right in the middle of a squadron of X-Wings during the Death Star run, or dodging Orcs and Goblins in Lord of the Rings? A new augmented reality (AR) tool called Volume may soon allow you to do just that.

    The developer behind Volume has utilised machine learning to create a tool that allows 2D videos and images to be converted into 3D spaces. The tool is still in relatively early stages of development, but the team has already successfully produced some proof-of-concept videos.

The team behind Volume hope that it will eventually become an accessible tool for storytelling, archiving and cultural reconstruction. The app is able to predict and reconstruct 2D footage in 3D and place the characters within the user’s space in AR, enabling the user to see the movie in an entirely new way.

    The developers, Or Fleisher and Shirin Anlen, have already released a brief demo showing a scene from hit Quentin Tarantino movie Pulp Fiction converted into an AR experience. A video of the demo can be viewed below.

Volume was inspired by recent research into immersive digital platforms and methods of 3D depth prediction. Volume uses a state-of-the-art machine learning algorithm: a convolutional neural network allows the software to ‘observe’ RGBD images and build a model of how to reconstruct 2D images into a 3D space.
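
As a rough sketch of what that reconstruction amounts to (this is not Volume’s actual code; the pinhole-camera model, intrinsics and function names are assumptions made for illustration), a per-pixel depth map predicted by such a network can be back-projected into a coloured 3D point cloud:

```python
import numpy as np

def backproject_depth(depth, rgb, fx, fy, cx, cy):
    """Turn a predicted depth map (H x W) plus its RGB image (H x W x 3)
    into an N x 6 point cloud (x, y, z, r, g, b), assuming a simple
    pinhole camera with focal lengths fx, fy and principal point cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colours = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0                         # drop pixels with no depth estimate
    return np.hstack([points[valid], colours[valid]])
```

An AR framework can then anchor the resulting points in the user’s surroundings, which is roughly the “place the characters within the user’s space” step described above.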

The intention is for Volume to become an end-to-end web app, easily accessible from any compatible web browser and capable of taking any input, from 2D still images to video sequences or GIFs, and converting it into an immersive 3D experience. The aim is for Volume to support content for AR, virtual reality (VR) and web platforms, with its models being flexible enough to support a variety of uses, from academic research to media and entertainment.

    Further information can be found on the official Volume website. VRFocus will bring you further news on this project once it becomes available.

    Accessible XR Development After VRTK

    As a developer moving from the web and app world into 3D and XR, I’ve had to constantly re-evaluate my platform and tool choices as the industry evolves at tweetstorm velocity. Today’s XR development pipeline is clogged by a glut of proprietary hardware and software APIs and SDKs by competing firms like Oculus, HTC Vive, Microsoft, Google, Apple, Sony and SteamVR — to say nothing of emerging third-party peripherals like Logitech’s VR-tracked keyboard, the new AR-enabling Zed Mini dual-eye camera for the Rift or Vive, or any other industry-disrupting Kickstarters that might’ve sprung up since I started typing this paragraph.

    Left to right: a bunch of cool stuff I want.

    Each platform’s fine — even technologically stunning, one might argue — with respective strengths, weaknesses and use cases. But the distinctions force XR developers to ask hard questions: Where is the market going? How do I invest my skill-building time? What devices should my app support? What platform can I get a job working on? Developers must be business analysts as much as creative technologists to stay relevant. It’s easy to suffer choice paralysis with such a wide array of options, and easier still to bet on the wrong technology and lose.

    Personally, I also face certain technical, logistic and financial realities as an independent XR developer in the Midwest (US), where the industry hasn’t proliferated as it has in major coastal cities. Thankfully, game engines like Unity and Unreal are rapidly democratizing this space. Both engines seek to bridge the gaps between the various XR SDKs, employing thousands of engineers to ensure their software plays nicely with just about any significant third-party API. For example, as I wrote about in August, the Oculus SDK integrates beautifully with Unity and comes equipped with many of the scripts and prefabs needed to quickly prototype, develop and deploy a custom Rift app.

    I miss bossing around my hand-modeled #MadeWithBlocks BB-8. Check out my deep dive on this project, The Future of VR Creation Tools.

    That’s fantastic, but it’s still non-standard. To port the same Unity app to the HTC Vive or a Windows HMD is non-trivial — not impossible or even terribly difficult, but non-trivial. Maintaining your app for multiple SDKs over the long haul is similarly non-trivial. Non-trivial costs money and time and we’re all short on both.

    Instead imagine if XR practitioners had to worry less about betting on the right platform or device and could instead focus on creating unique and compelling experiences, content and UX. The first step down that path was VRTK — but sadly, one of the best tools to combat the VR SDK surplus will soon be hobbled by the loss of its founder.

    VRTK: The Open Source Approach

    This free, open source Unity toolkit aims to knit together a single workflow for a variety of VR APIs. It comes with the same stock prefabs and scripted mechanics you might find included in any single proprietary SDK, but makes each piece of functionality identical whether deployed to Oculus, SteamVR (read: Vive and, with v3.3.0, Windows HMDs) or Daydream — covering all major VR HMD manufacturers today.

It’s a boon to anyone wanting to dip their toes in the waters of VR development. Think of it: Want to implement teleportation locomotion over a Unity NavMesh? Just drop the component onto your player prefab. Want to test out grab mechanics, or a quick bezier pointer? VRTK’s demo scenes have you covered, and they’ll work easily on a variety of devices. Since it’s open source, you’re also free to dive in and customize the code. Struggling to get a feature working in your own project? Check out its implementation across a variety of SDKs — not a bad way to grok new XR coding concepts.

Sadly, VRTK’s creator is sunsetting the woefully underfunded project. The UK-based developer TheStoneFox — who until recently was actively seeking contributors, partnerships and support — announced recently that he will be stepping back from the project post-version 3.3.0. Though VRTK boasts an active Slack community, a growing list of “made with” titles and a recent Kickstarter, TheStoneFox was unable to attract the support necessary to sustain it for the long term.

Now, as the opportunity to contribute to and utilize a premier open-source VR development pipeline expediter fades, what, if anything, will replace it?

    OpenXR: One API to Rule Them All

The VRTK approach — using Unity scripting to knit together similar mechanics across a spectrum of VR SDKs — is necessary in the current fragmented development landscape, but there are downsides. Someone in the community still has to monitor the various proprietary SDK updates, and your end-user VRTK app still has to be mindful of VRTK’s own changes over time. In this way, VRTK treated the symptoms of the VR SDK overload but was not equipped to address the root cause. Enter OpenXR, The Khronos Group’s upcoming industry standard:

The standard, announced in December 2016, is being written now and is quickly gaining traction among industry players (with the notable exception of Magic Leap). Instead of forcing developers to grapple with variable proprietary SDKs and all the accompanying business consequences, companies will tailor their hardware and software to comply with OpenXR’s spec. Khronos, the non-profit responsible for shepherding the Vulkan, OpenGL, OpenGL ES and WebGL standards, is leading the charge. Cue the infographics!

    On the left, the problem — on the right, the solution:

    Images courtesy of https://www.khronos.org/openxr.

    “Each VR device can only run the apps that have been ported to its SDK. The result is high development costs and confused customers — limiting market growth,” reads some fairly accurate marketing copy on their website. “The cross-platform VR standard eliminates industry fragmentation by enabling applications to be written once to run on any VR system, and to access VR devices integrated into those VR systems to be used by applications.”

A working group of industry heavyweights has agreed that the standard should be extensible to allow for future innovation and should support a range of experiences — anything from a 3-DoF controller all the way to high-end, room-scale devices.

The only thing missing is a realistic timetable for when this standard will start to have an impact on the development community and its day-to-day workflow. Until the market-movers get their act together, we’ll be left scrambling (and patching up VRTK projects, in many cases).

    OpenXR supporters: everyone except Magic Leap.

    The Cinema of Attractions: Slow Your Reel

    But should we so quickly welcome industry standardization while the technology is still so new and full of possibilities? That’s the question asked in a recent Voices of VR podcast by Kent Bye and Rebecca Rouse. The two discussed the early days of cinema — when exploration and experimentation were the status quo — and Rouse drew striking parallels between that era and the current period in XR production and development.

    Pure spectacle then and now. Left: a Cinema of Attractions-era still. Right: Chocolate VR.

    “[Scholars of early film] came up with this term ‘cinema of attractions’ because they saw an incredible wealth of diversity and kind of range of exuberant experimentation in those early pieces, so they were very hard to sort of clump them together — there was such diversity — but this ‘attraction’ idea was a large enough umbrella, because all of those early pieces are in some way showing off the technology’s capabilities and generate this experience of wonder or amazement for the viewer. And the context in which they were shown is that of attractions, so they were shown at world’s fairs and as a part of vaudeville shows with other kinds of performances and displays.”

     — Rebecca Rouse, assistant professor of communication & media at Rensselaer Polytechnic Institute

Sounds eerily familiar, huh? The whole podcast is well worth a listen, but tl;dr: while there are obvious consumer and market advantages to XR standards, Rouse argues that perhaps we shouldn’t jump the gun here — not during this era of frenetic, often avant-garde XR experimentation across art, science, cinema and gaming. Looking around the industry, it’s hard to disagree.

    EditorXR

    One man-eating-the-camera-brilliant new application of XR technology is Unity Labs’ EditorXR. Created by Unity’s far-future R&D team (whose roles often find them working on projects and products five-to-ten years away from consumer adoption), EditorXR offers you an interface to create custom XR Unity scenes entirely within virtual reality.

    Oh! And there’s flying, among other superpowers — soar through your scene like Superman or scale the whole thing down to a pinhole. They’ve literally ported the Unity inspector, hierarchy and project windows (again among others) to an increasingly user-friendly VR UI pane on your wrist. With the latest update, you’re able to:

    • hook into Google’s Poly asset database web API in real-time inside VR
    • create multiplayer EditorXR sessions for editing Unity scenes with friends and collaborators
    • run EditorXR with Unity’s primary version 2017.x editor

    It’s still new and I’ve encountered bugs, but it’s a foregone conclusion that this tech will become a standard feature of Unity’s scene creation process as XR technology matures and proliferates. Even their alpha and beta efforts evoke the same sense of wonder and possibility that early Cinema of Attractions-era moviegoers must have felt.

For more insight on the design side, check out this deep dive on the future of XR UX design by Unity Labs’ Dylan Urquidi or the Twitter feed of Authoring Tools Group Lead, Timoni West.

    ML-Agents

Another experimental Unity project, ML-Agents, explores one of the most promising avenues for the future of XR development, design and UX: machine learning. Using so-called “reinforcement learning” techniques, which expressly don’t feed the AI model any sample data or rules for analysis, ML-Agents instead applies simple rewards and punishments (in the form of tiny float values) based on the outcomes of the agents’ [usually very narrowly defined set of] behaviors.

Stretched out over hundreds of thousands, if not millions, of trial-and-error training sessions, the computer experiments with its abilities and forms a model of how best to achieve the desired goal. In this way, your Agents become their own teachers: you just write the rubric.
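
To make that loop concrete, here is a minimal, generic Python sketch of reward-driven trial and error (tabular Q-learning on a toy one-dimensional world). It is not the actual ML-Agents API, which pairs C# Agents in Unity with an external Python trainer; the environment, reward values and hyperparameters here are invented purely for illustration.

```python
import random

# Toy world: the agent starts in cell 0 of a 5-cell line and is rewarded for reaching the far end.
N_STATES, ACTIONS = 5, (-1, +1)             # actions: step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1       # learning rate, discount factor, exploration rate

for episode in range(10_000):               # many cheap trial-and-error sessions
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what has been learned so far, occasionally explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        # Tiny float rewards and punishments, exactly as described above.
        reward = 1.0 if s_next == N_STATES - 1 else -0.01
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next
```

After enough episodes, the table of state-action values encodes a policy that walks straight to the rewarded cell; in effect, the agent has taught itself how to satisfy the rubric.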

    The original GitHub commit contained some basic demo scenes and the development community quickly took up the torch from there. Unity’s Alessia Nigretti followed up the original blog with one describing how to integrate ML-Agents into a 2D game. On Twitter, @PunchesBears has been demonstrating similar concepts — and showing that often enough, Agents respond to developers’ carefully calculated reward system in ways they don’t anticipate. Similar to actual gamers, no?

    In one of my favorite applications of ML-Agents, the developer Blake Schreurs actually brings a 6-DoF robo-arm Agent trained to seek a moving point in space into virtual reality — with slightly terrifying results once he assigns that moving target to his face.

    Imagine someone applying this training model to actual robotics and fat-fingering the wrong key. Or don’t, whatever. 

He’s down for the count! I was immediately reminded of the audiences pouring out of theaters in 1895, afraid they’d be run down by the Lumière brothers’ Arrival of a Train at La Ciotat. We’re still in the salad days of both machine learning and XR development compared to where we hope to be 10 or even 50 years from now. In that time, some combination of traditional or procedural AI with these new machine learning approaches will doubtless lead to great developments in gaming and XR at large — or even in the very design process and daily workflow of computing itself.

    Rift OS Core 2.0

    With Rift’s new Core 2.0 OS, your entire Windows PC is accessible from your right-hand menu button. Being able to view and use your desktop apps, as well as pin windows inside other VR apps, introduces new possibilities for XR workflows (and even for traditional computing workflows) in VR.

While working on my next project, entirely within VR, I can watch Danny Bittman’s great Unity rendering and lighting tutorial on YouTube in a pinned browser while messing with those same settings on my wrist in EditorXR. I can watch @_naam craft original assets in Google Blocks at the same time I do, or I could gather assets from the Poly database and deploy them to my Unity scene in real-time VR, pulling up Visual Studio to code some game logic as I please.

    That sounds pretty goddamn metaversal to me — and before long, we likely won’t even need code.

    The XR Developer of the Future Is Not a Developer

    If XR technology is to go mainstream, the development process must be as efficient and accessible as possible — and likely even open to non-developers through content creation and machine learning applications. Spanning sciences and disciplines, there’s so much more to talk about and speculate over that this piece hasn’t even touched on (next time I’ll examine WebVR and A-Frame as viable XR development pathways). More and more pieces of this accessible, standardized XR development pipeline will fall into place as the immersive computing revolution rolls on, though I’m thankful the XR industry isn’t ready to ditch its Cinema of Attractions ethos quite yet.