Google Announces VR Labs For Remote Learning

It has already become clear that immersive technology such as virtual reality (VR) has a lot to offer the world of education. Several higher education facilities have begun to experiment with the technology, and at Google I/O, it was announced that Google will be rolling out a VR laboratory that can be used by remote learning students.

For many modern students, education needs to be balanced with part-time or full-time work. Those students can benefit greatly from taking classes online, but still face obstacles when it comes to practical activities, which is where VR comes in.

Google VR Labs / Labster

For subjects such as biology, there are few substitutes for hands-on experience. A practical lab session led by an experienced teacher is a vital part of many qualifications, but this leaves out remote students who are unable to attend in person.

In order to enable students to participate in these practical classes, Google have teamed up with a company called Labster to create a VR laboratory, meaning that colleges and universities will be able to offer fully online, remote courses in subjects such as biology.

The Google and Labster technology uses advanced simulations, built on the Google Daydream platform, that reflect real-world outcomes using mathematically accurate equations. This has advantages for academic institutions, as the VR simulations are cheaper than the equivalent laboratory equipment.

Remote students will be able to engage with more courses, meaning that institutions can offer more places to students. Students will also be able to take advantage of lab time when and where they want, for as long as they want, a flexibility not available with a real-world physical lab.

Google VR Labs / Labster

32% of students took an online course during 2017, so the provision of VR-based online learning has significant implications for the future of education. VRFocus will continue to bring you news on VR in training and education.

Google Introduce WebXR Standard To Chrome

Earlier this year, Mozilla introduced a new standard called WebXR that could be used to deliver mixed reality (MR) content directly to web browsers. The API is an advancement of the existing WebVR technology. Google have embraced the idea and are bringing it to the Chrome browser.

At Google I/O, during a session titled ‘The Future of the Web Is Immersive’, speakers Brandon Jones and John Pallett demonstrated some of the new features of WebXR.

WebVR was well received for the most part, but came with its own set of challenges and problems. As a result of extensive feedback, WebXR was developed to address those problems and to bring augmented reality (AR) and MR content to the browser without the need for a headset.

Jones indicated that WebXR offers many advantages over the older WebVR standard, including more browser-side optimisation and cleaner, more consistent and predictable operation, which makes development easier for coders. The new API is also compatible with a wider range of devices, opening up new audiences for the content on offer.

In the area of optimisation, an example was offered regarding the rendering of VR content on a Google Pixel XL smartphone. Using WebVR, a resolution of 1145 x 1807 is possible, roughly 2 million pixels. Under the new WebXR standard, the same content can be rendered at 1603 x 2529, roughly 4 million pixels. This doubles the number of pixels whilst maintaining the same high framerate.
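As a quick sanity check on those figures (a minimal sketch; the two resolutions are the ones quoted in the talk, nothing else is from the source):

```typescript
// Pixel-count comparison for the WebVR and WebXR resolutions quoted
// for the Google Pixel XL demo.
const webVR = { width: 1145, height: 1807 };
const webXR = { width: 1603, height: 2529 };

const pixels = (r: { width: number; height: number }) => r.width * r.height;

console.log(pixels(webVR)); // 2069015, roughly 2 million pixels
console.log(pixels(webXR)); // 4053987, roughly 4 million pixels
console.log((pixels(webXR) / pixels(webVR)).toFixed(2)); // "1.96", roughly double
```

The ratio comes out at about 1.96, which matches the claim of doubling the pixel count.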

This new technology means that developers can make immersive web-based content look better and run more smoothly simply by switching to the new API. Google have made WebXR support available in Chrome as an Origin Trial starting with Chrome 67, which is currently in beta. For developers who wish to explore the features without deploying an Origin Trial, WebXR can be enabled via the chrome://flags page.
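While the trial runs, pages have to cope with browsers that expose WebXR, only legacy WebVR, or neither. The detection pattern can be sketched with a hypothetical helper (the `detectImmersiveAPI` function and its return labels are illustrative, not from the talk; `navigator.xr` is the WebXR entry point and `navigator.getVRDisplays` the legacy WebVR one):

```typescript
// Hypothetical feature-detection helper: given a navigator-like object,
// report which immersive web API is available.
type NavigatorLike = {
  xr?: unknown;            // WebXR Device API entry point
  getVRDisplays?: unknown; // legacy WebVR entry point
};

function detectImmersiveAPI(nav: NavigatorLike): "webxr" | "webvr" | "none" {
  if (nav.xr !== undefined) return "webxr"; // prefer the newer standard
  if (typeof nav.getVRDisplays === "function") return "webvr"; // fall back to WebVR
  return "none"; // no immersive support at all
}

// In a browser this would be called as detectImmersiveAPI(navigator);
// here we exercise it with stand-in objects.
console.log(detectImmersiveAPI({ xr: {} })); // "webxr"
console.log(detectImmersiveAPI({ getVRDisplays: () => [] })); // "webvr"
console.log(detectImmersiveAPI({})); // "none"
```

Preferring the WebXR branch when both are present is what lets a page pick up the rendering improvements automatically once the flag or Origin Trial is active.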

Further news about WebXR and other updates from Google I/O will continue to be covered here on VRFocus.

Google Announces Major Update For ARCore

The Google I/O conference has been showcasing new technologies and advancements for existing products. One of the new developments has been a major update for ARCore, Google’s augmented reality (AR) toolkit.

ARCore is the successor to Google’s Project Tango, and the 1.0 version was launched at the Mobile World Congress in Barcelona in early 2018. With the new update, Google are introducing several new and improved features, such as social AR.

ARCore

ARCore brings enhanced and extended AR capabilities to over 100 million Android devices. Since its launch, developers have been able to publish ARCore-enabled apps to the Play Store, compatible with the 13 device models that supported the toolkit at launch, including Google’s own Pixel, Pixel XL, Pixel 2 and Pixel 2 XL smartphones.

The toolkit allows for a more complete understanding of the environment, letting developers place AR objects on surfaces such as furniture, posters, books and more. With a newly introduced ability called Vertical Plane Detection, AR objects can also be placed on vertical surfaces such as textured walls, opening up new options for apps.

The new updates allow for collaborative AR experiences, such as playing multiplayer games or painting an AR community mural, using a capability called Cloud Anchors. This capability will be available across both Android and iOS devices.

For developers, faster AR development is made possible with a technology called Sceneform, which lets Java developers build immersive 3D apps for AR without having to learn complex APIs such as OpenGL. This can be used to add AR features to existing apps or to build new AR apps from scratch.

ARCore update

The ARCore technology is considered more consumer-friendly than its predecessor Project Tango, since Tango required expensive additional equipment, such as a depth-sensing IR camera and motion-tracking sensors, to be built into the phone. As a result, a large number of apps on the Google Play Store have already begun to take advantage of the ARCore toolset, with more to follow in the near future.

For further news of announcements from Google I/O, keep checking back with VRFocus.
Google Unveil AR Visual Navigation

The Google I/O keynote contained a number of interesting new and improved technologies, such as new features for the Google Assistant. One of the reveals during the first day keynote involved a new augmented reality (AR) navigation tool for Google Maps.

During the first day keynote, Aparna Chennapragada spoke about some new features for Google Maps, discussing how the requirements that users have for Google Maps have changed, and how much more is now expected of the service.

Google Maps AR / Google Lens

To help provide for these changing needs, the Google Maps team have worked to integrate Google Maps with the smartphone camera. To illustrate how this would work, Chennapragada gave an example taken from real life: imagine exiting a train or subway station on your way to an appointment. Google Maps says to head south on High Street, but how do you know which way is south? And if the location is unfamiliar, how do you know which street is High Street? This is where the camera and AR integration comes in.

Instead of a top-down map, users will be able to see the street in front of them through the camera, with an AR arrow overlay indicating direction and distance. The map view sits just below, so users can double-check that the two match up. The Google Maps team have even been experimenting with an animated guide character that users can follow, such as the animated fox shown briefly in the demo.

In addition, the Maps and Camera integration can be used to show users what shops, landmarks, hotels and restaurants are nearby, by tagging the information from Maps to the correct building, making it easier for users to find a location they are searching for.

GPS alone lacks the precision needed to make this possible, so Google have been working on a new system referred to as VPS, or the Visual Positioning System. VPS uses the visual features of the environment to estimate a more precise position and orientation.

Google Lens / Google Maps AR Fox

Further news from the Google I/O event will continue to be reported here on VRFocus.