Facebook Open-sources AI Habitat To Help Robots Navigate Realistic Environments

Facebook AI Research is making available AI Habitat, a simulator for training embodied AI agents, such as a home robot, to operate in environments meant to mimic typical real-world settings like an apartment or office.

Understanding what to do when you say “Can you check if my laptop is in the other room and, if it is, bring it to me?” requires a home robot to draw together multiple forms of intelligence.

Embodied AI research can help robots navigate indoor environments by marrying AI systems for computer vision, natural language understanding, and reinforcement learning.

“Habitat-Sim achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU, which is orders of magnitude faster than the closest simulator,” a dozen AI researchers said in a paper about Habitat. “Once a promising approach has been developed and tested in simulation, it can be transferred to physical platforms that operate in the real world.”

Facebook Reality Labs, formerly named Oculus Research, is also open-sourcing Replica, a data set of photorealistic 3D indoor environments, such as a retail store and an apartment, that resemble the real world. AI Habitat can work with Replica, but it also works with other embodied AI research data sets, such as Matterport3D for indoor environments.

Simulated data is commonly used in AI to train robotic systems, create reinforcement learning models, and power systems ranging from Amazon Go to enterprise applications of few-shot learning. Simulations allow control of the environment and reduce the cost of collecting real-world data.

AI Habitat was introduced in an effort to create a unified environment and address standardization of embodied AI research across the robotics and AI community.

“We aim to learn from the successes of previous frameworks and develop a unifying platform that combines their desirable characteristics while addressing their limitations. A common, unifying platform can significantly accelerate research by enabling code re-use and consistent experimental methodology. Moreover, a common platform enables us to easily carry out experiments testing agents based on different paradigms (learned vs. classical) and generalization of agents between datasets,” said Facebook.

In addition to the Habitat simulation engine, the Habitat API provides a library of high-level embodied AI algorithms for things like navigation, instruction following, and question answering.
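
As a rough illustration of how that API is consumed, the habitat-api examples of the time show an episode loop along these lines; the config path, the scene data it references, and the use of random actions in place of a learned policy are assumptions that depend on the installed version and downloaded datasets:

```python
import habitat

# A minimal sketch of running one PointNav episode through the Habitat API.
# The config path and the scene dataset it points to are assumptions: both
# depend on the habitat-api version installed and the data downloaded locally.
config = habitat.get_config("configs/tasks/pointnav.yaml")
env = habitat.Env(config=config)

observations = env.reset()  # dict of sensor readings (e.g. RGB, depth, point-goal)

# Randomly sampled actions stand in for a learned navigation policy.
while not env.episode_over:
    observations = env.step(env.action_space.sample())

print(env.get_metrics())  # episode metrics such as success and SPL
env.close()
```

In practice, the random action sampling would be replaced by an agent that maps the observation dict to an action at every step.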

Facebook released the PyTorch Hub platform for reproducibility of AI models earlier this week.

Researchers found that “learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations” and that only agents with depth sensors generalize well across datasets.

“AI Habitat consists of a stack of three modular layers, each of which can be configured or even replaced to work with different kinds of agents, training techniques, evaluation protocols, and environments. Separating these layers differentiates the platform from other simulators, whose design can make it difficult to decouple parameters in order to reuse assets or compare results,” the paper reads.
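
To give a concrete, hypothetical sense of what that configurability can look like in the open-sourced code, the task configuration can be overridden to point the same agent and training code at a different dataset or sensor suite; the exact config keys and data path below are assumptions and vary across habitat-api versions:

```python
import habitat

# A hedged sketch of the layered design: swap the scene dataset and sensor
# suite purely through configuration, leaving agent and training code untouched.
# The keys and the data path below are illustrative assumptions.
config = habitat.get_config("configs/tasks/pointnav.yaml")
config.defrost()
config.DATASET.DATA_PATH = "data/datasets/pointnav/replica/v1/{split}/{split}.json.gz"
config.SIMULATOR.AGENT_0.SENSORS = ["RGB_SENSOR", "DEPTH_SENSOR"]
config.freeze()

env = habitat.Env(config=config)  # same environment interface as before
```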

AI Habitat is the latest Facebook AI initiative to use embodied AI research, and follows research to train an AI agent to navigate the streets of New York with 360-degree images and to get around an office by watching videos.

Facebook VP and chief AI scientist Yann LeCun told VentureBeat the company is interested in robotics because the opportunity to tackle complex tasks attracts the top AI talent.

AI Habitat is the most recent example of tech giants attempting to deliver a robotics creation platform for AI developers and researchers. Microsoft introduced a robotics and AI platform in limited preview last month, while Amazon’s AWS RoboMaker, which draws on Amazon’s cloud and AI systems, made its debut in fall 2018.

How AI Habitat works is detailed in an arXiv paper written by a team with members from Facebook AI Research, Facebook Reality Labs, Intel AI Labs, the Georgia Institute of Technology, Simon Fraser University, and the University of California, Berkeley.

AI Habitat will be showcased in a workshop next week at the Computer Vision and Pattern Recognition (CVPR) conference in Long Beach, California.

In other recent contributions to the wider AI community, Facebook AI research scientist Mike Lewis and AI resident Sean Vasquez introduced MelNet, a generative model that can imitate music and the voices of people like Bill Gates.

Major object detection AI systems from Google, Microsoft, Amazon, and Facebook are less likely to work for people in South America and Africa than in North America and Europe, and less likely to work for households that make less than $50 a month.

Facebook VP of AR/VR Andrew Bosworth said earlier this week that new Portal devices, the first since the video chat devices were introduced in October 2018, will make their public debut this fall.

Facebook also announced plans to open an office with 100 new AI roles in London.

This post by Khari Johnson originally appeared on VentureBeat.

AR Innovator Phiar Raises $3M For Its Navigation System

Ever had problems with your sat nav where you’ve missed a turning because the directions weren’t clear, whether from the audio or a confusing on-screen map? Well, augmented reality (AR) startup Phiar (pronounced ‘Fire’) is developing its own smartphone navigation system to combat just those problems, recently securing $3 million in funding to aid development.

The idea behind the Phiar system is to make using satellite navigation easier and safer, so drivers don’t have to work out what the map is telling them whilst driving.

“The idea came after taking too many wrong turns on the streets of Boston,” explains Co-Founder and CEO Dr. Chen-Ping Yu in a statement. “Trying to interpret directions from a two-dimensional map, especially at high speeds, is as difficult as it is dangerous. Navigation should be convenient and straightforward, and what we’re building is going to help people get to their destination faster and safer.”

Drivers will be able to download the app onto their smartphone and then secure the device to their dashboard or windscreen for easy viewing. The app then overlays navigation cues on the real world in front of them for straightforward navigation.

“We want our users to keep their eyes on the road, looking at the real world rather than a 2D rendering of it,” says Ivy Li, Phiar Co-founder and CTO. “What makes the experience so unique is the live AR path overlays, made possible by our super-efficient computer vision and deep learning AI that runs on your smartphone. This augments your surroundings rather than distracting you from them.”

Phiar was formed during a Y Combinator batch in early 2018 to solve the directional and safety issues experienced by users of traditional two-dimensional navigation systems. The recent seed round was co-led by the Venture Reality Fund and Norwest Venture Partners, with additional participation from Anorak Ventures, Mayfield Fund, Zeno Ventures, Cross Culture Ventures, GFR Fund, Y Combinator, Innolinks Ventures, and Half Court Ventures.

The company expects to launch the software on mobile app stores in mid-2019. For further updates from Phiar, keep reading VRFocus.