Mixed Reality Recordings with Microsoft's HoloLens

If you have developed an application for the HoloLens, the app is hard to demonstrate without the headset. At its events, Microsoft therefore always equipped a camera with a HoloLens, making the action visible to the audience in the hall. Until now this option was reserved for Microsoft, but now any developer with a second HoloLens can set up the same kind of recording.

Since Microsoft's HoloLens is a closed system that does not run off a PC, recording is somewhat more involved than a mixed reality capture of virtual reality content. You need a camera with an HDMI output, a second Microsoft HoloLens, a mount for attaching the HoloLens to the camera, an application that can also be displayed on the desktop (shared experience), and plenty of patience during calibration. Once your system is set up, you can use it to record videos, take photos, and even show the content via live stream during a demonstration.

Microsoft has also published a detailed guide to help you with the setup. The required hardware will cost around €4,000–€5,000 (depending on your requirements), and certain adjustments to the software are necessary as well. That is why this solution is aimed specifically at developers rather than streamers.

The post Mixed Reality Recordings with Microsoft's HoloLens first appeared on VR·Nerds.

Microsoft Now Lets You Film Mixed Reality With HoloLens Hack

Showing off what Microsoft’s HoloLens mixed reality can do is difficult without actually putting it on someone’s head, but the company’s new camera hack is providing a helping hand.

Whenever we’ve seen HoloLens on-stage at shows like E3, or Microsoft’s own press conferences, the company will have one person wearing the kit, while the audience sees what they see using a special camera. After today, however, anyone with two HoloLens units can also record what one user is seeing thanks to what the company is calling ‘Spectator View’.

It’s a pretty simple concept but you’ll likely need to do some shopping if you want to try it out. First off, you’ll have to download a specific app that enables other software to run as a shared experience. From there, you’ll have to assemble a camera rig with a camera that has an HDMI-out or photo capture SDK. You’ll need an aluminum bracket that connects the bottom of your HoloLens to the top of your camera, and a 3D printed adapter that will link the two together.

Fear not; Microsoft has an in-depth guide to assembling the rig, complete with the nuts and bolts you’ll need.

Spectator View is similar in concept to the mixed reality filming seen in the VR industry, though it has added complications given HoloLens’ entirely independent design and the fact that holograms can be viewed from anywhere, not just within a specific, tracked space.

Requiring a second $3,000 HoloLens means this isn’t the most cost-effective solution for demoing AR, but it does open up your HoloLens apps to a much larger viewing audience. It’s a first step in bringing mixed reality into YouTube videos or sharing images on social networks; imagine giving talks and swapping out PowerPoint slides for 3D data visuals that are far more engaging for the audience.

Ultimately we’d like to see dedicated cameras for this use, or perhaps a cheaper alternative from Google as its Project Tango AR tech continues to evolve. Given HoloLens itself is still in its developer kit stages, we wouldn’t expect to see official products from Microsoft that film in MR until the device itself is available to consumers. For now, this is a great way to show what HoloLens can do to as many people as possible.

See What Others See in HoloLens with Microsoft’s Spectator View Tool

Today Microsoft has launched a new tool for HoloLens developers called Spectator View, thus enabling them to demonstrate their applications to an audience.

Whether it’s virtual reality (VR), augmented reality (AR) or mixed reality (MR), the difficulty in getting people to understand these technologies has always been user interaction: letting someone try it is usually easier than explaining it. For Microsoft and its MR headset HoloLens, the company created Mixed Reality Capture (MRC) to aid in the visualization of holograms, letting an audience see what the presenter on stage can see, but only from a first-person perspective. With the Spectator View tool, this now becomes third-person.

The tool is part software, part camera system, in which a HoloLens is mounted to a DSLR. Using a special bracket – which can be 3D printed – the headset sits on top of the camera; the headset then connects to a PC wirelessly, while the camera outputs via HDMI to a capture card. It’s then a case of starting up the Spectator View software to test it out.
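Conceptually, the PC uses the mounted HoloLens’s pose to render the holograms from the camera’s viewpoint and then layers that render over the captured video frame. A minimal sketch of that compositing step, per pixel (illustrative only; the function name and value ranges are our assumptions, not Microsoft’s actual implementation):

```python
def composite_pixel(camera_rgb, hologram_rgba):
    """Alpha-blend one rendered hologram pixel over one camera pixel.

    camera_rgb: (r, g, b) floats in 0..1 from the HDMI capture.
    hologram_rgba: (r, g, b, a) floats in 0..1 from the hologram render,
    where a == 0 means "no hologram here, show the camera feed".
    """
    r, g, b, a = hologram_rgba
    return tuple(h * a + c * (1.0 - a) for h, c in zip((r, g, b), camera_rgb))

# An opaque red hologram pixel replaces the camera pixel...
print(composite_pixel((0.2, 0.2, 0.2), (1.0, 0.0, 0.0, 1.0)))  # (1.0, 0.0, 0.0)
# ...while a fully transparent one leaves the camera feed untouched.
print(composite_pixel((0.2, 0.2, 0.2), (1.0, 0.0, 0.0, 0.0)))  # (0.2, 0.2, 0.2)
```

Doing this for every pixel of every frame is what turns the capture-card feed plus the hologram render into the single mixed reality video the audience sees.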

As explained in Microsoft’s blog: “A spectator view camera will allow your audience to do more than just see what you see when wearing a HoloLens. Yes, it allows others, who aren’t wearing HoloLens, to see the holograms you would see if you were wearing the device, but it also allows you to see what the people wearing HoloLens are doing and how they are interacting with their mixed reality experience.” Check out the blog for all the official documentation needed to get you started.

For all the latest HoloLens news, keep reading VRFocus.

Bringing Conversational Gameplay & Interactive Narrative to ‘Starship Commander’

Developer Human Interact announced this past week that they are working with Microsoft’s Cognitive Services to power the conversational interface behind their choose-your-own-adventure, interactive narrative title Starship Commander. They’re using Microsoft’s Custom Recognition Intelligent Service (CRIS) as the speech recognition engine, and Microsoft’s Language Understanding Intelligent Service (LUIS) to translate spoken phrases into a number of discrete intent actions that are fed back into Unreal Engine to drive the interactive narrative.

I caught up with Human Interact founder and creative director Alexander Mejia six months ago to talk about the early stages of creating an interactive narrative using a cloud-based, machine-learning-powered natural language processing engine. We talk about the mechanics of using conversational interfaces as a gameplay element, accounting for gender, racial, and regional dialects, the funneling structure that accumulates a series of smaller decisions into a larger fork in the story, the dynamics between multiple morally ambiguous characters, and the role of a character artist who sets the bounds of an AI character’s personality, core belief system, and complex set of motivations.
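That funneling structure, where many small decisions accumulate before the story commits to a major fork, can be sketched roughly as follows (the scores, threshold, and branch names here are invented for illustration; this is not Human Interact’s code):

```python
def pick_branch(choices, threshold=2):
    """Toy 'funnel': each small decision nudges a running score
    (+1 friendly, -1 hostile); only the accumulated total decides
    which major story branch the narrative commits to."""
    score = sum(choices)
    if score >= threshold:
        return "alliance_branch"
    if score <= -threshold:
        return "war_branch"
    return "neutral_branch"

# Three friendly choices outweigh one hostile remark...
print(pick_branch([+1, +1, +1, -1]))   # alliance_branch
# ...while a mixed record keeps the player on the middle path.
print(pick_branch([+1, -1, +1, -1]))   # neutral_branch
```

The appeal of the funnel is that no single utterance derails the story: individual lines only nudge the state, and the writers authored the big forks around the accumulated totals.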

Here’s a Trailer for Starship Commander:

Here’s Human Interact’s Developer Story as Told by Microsoft Research:


Music: Fatality & Summer Trip

The post Bringing Conversational Gameplay & Interactive Narrative to ‘Starship Commander’ appeared first on Road to VR.

VR Experience Starship Commander Unveiled by Microsoft

The virtual reality (VR) industry is constantly striving for new ways to immerse players in virtual worlds, yet voice controls are seldom seen in videogames. Now Human Interact, a studio formed only a year ago, together with Microsoft, has revealed the first look at its debut VR project, Starship Commander.

Starship Commander is a choose-your-own-adventure VR narrative in which players use neither a gamepad nor motion controllers; all interactions are controlled through speech. As the name suggests, players are the commander of a spaceship on a secret mission as part of a massive intergalactic war.

The experience utilises the Custom Speech Service (formerly CRIS), part of the Microsoft Cognitive Services collection, to allow lifelike interaction with the videogame. The studio chose Microsoft’s software for the ability to train it with a custom script.

“We were able to train the Custom Speech Service on keywords and phrases in our game, which greatly contributed to speech recognition accuracy,” said Adam Nydahl, founder and principal artist at Human Interact on the Microsoft blog. “The worst thing that can happen in the game is when a character responds with a line that has nothing to do with what the player just said. That’s the moment when the magic breaks down.”

“Using the Custom Speech Service, we were able to cut the word error rate in half without sacrificing any latency,” adds Alexander Mejia, founder and Creative Director at Human Interact. “It responds as soon as a person stops speaking, which is incredible.”

Starship Commander is currently in development for Oculus Rift and HTC Vive although no release date has been set yet. For any further Starship Commander updates, keep reading VRFocus.

Microsoft Announces Voice Control for Virtual Reality

Someday we will be able to interact with a computer completely naturally. We have been imagining this future for many years, and we keep moving a step closer to it. Microsoft is also working toward this future, and particularly in virtual reality applications, voice control could make working with software much less complicated.

With CRIS and LUIS, Microsoft is announcing two new offerings intended to enable better voice input than Siri or Google Assistant. The Custom Recognition Intelligent Service (CRIS) is meant to allow companies and developers to create customized speech recognition. To do so, sample files (audio and text) are uploaded so that input can be recognized more reliably. This should improve recognition under unusual conditions (for example, in a warehouse) and deliver good results in highly specialized fields (for example, medicine).

LUIS stands for “Language Understanding Intelligent Service”. According to Microsoft, this service can recognize the meaning behind words. You therefore don’t have to memorize exact phrases; the service understands what you are trying to say anyway.

Microsoft will not handle the VR integration itself, however; that is up to the developers.

The first sample application comes from Human Interact, and the game offers a complete integration of Microsoft’s tools.

As the video shows, the service could well be at least on par with Siri, and in VR this kind of input will bring many advantages. There are already VR games with voice input, but you couldn’t speak to them naturally; you had to know the exact commands. With LUIS, however, you can talk as you would with a friend, which can also significantly increase immersion, since you can forget that you are playing a video game.

CRIS and LUIS are available now if you are part of the current developer preview.

(Source: Upload VR)

The post Microsoft Announces Voice Control for Virtual Reality first appeared on VR·Nerds.

Microsoft Announces Siri Competitor with Voice-Activated VR Experience

You may not be aware of this but for years now Microsoft has been steadily working to build the world’s smartest computer brain. Now, that brain is getting a whole lot smarter.

Meet CRIS and LUIS

Today, Microsoft is announcing Custom Speech Service, the latest program to join the ranks of Microsoft Cognitive Services. This is a suite of innovations that tackle emerging AI issues like computer vision and machine learning. Custom Speech Service is a highly adaptable voice-to-text program that is being positioned as a much more intelligent version of Siri or the Google Assistant.

Custom Speech Service combines two bleeding-edge technologies to achieve this next-generation capability. The first is known as CRIS, or Custom Recognition Intelligent Service. According to Microsoft, CRIS:

“…Provides companies with the ability to deploy customized speech recognition. The developer uploads sample audio files and transcriptions, and the recognizer is customized to the specific circumstances. This can make recognition far better in unusual circumstances, such as recognition on a factory floor, or outdoors.”

Essentially what this means is that CRIS allows a given organization to build its own unique lexicon of voice commands to make specialized voice-to-text commands work better. So a hospital, for example, could build in a complex list of procedures or afflictions for patients to enquire about.

Joining CRIS to make Custom Speech Service as powerful as possible is LUIS (Language Understanding Intelligent Service). LUIS is described by Microsoft as an “intent engine”, and with its help computers can understand the meaning behind our words. In current voice command systems, a specific word or phrase is deliberately mapped by a programmer to a given action: “find coffee” or “take me to coffee” will both bring up your maps application and direct you to a nearby coffee shop. With LUIS, however, you could just as well say “find coffee”, “take me to coffee”, “I need coffee”, “I need a little pick-me-up”, or “I can’t keep my eyes open”.

LUIS is trained to understand what we mean, not just what we say, and with its help a much larger swath of voice commands can be usable to consumers with much less effort on the part of programmers.
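The idea can be illustrated with a toy intent resolver. This is not the LUIS API; where LUIS uses trained language models, the sketch below uses hand-written keyword rules, and every name in it is made up:

```python
INTENT_KEYWORDS = {
    # One intent, many surface phrasings that should trigger it.
    "get_coffee": ("coffee", "pick-me-up", "eyes open"),
}

def resolve_intent(utterance):
    """Map an utterance to an intent name, or 'unknown'."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

for phrase in ("Find coffee", "I need a little pick-me-up",
               "I can't keep my eyes open"):
    print(phrase, "->", resolve_intent(phrase))   # all resolve to get_coffee
```

A keyword list like this only covers the phrasings its author anticipated; generalizing to phrasings nobody wrote down is precisely the gap that LUIS’s training is meant to close.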

Starship Commander – VR Powered by Voice

Custom Speech Service is just that, a service. Microsoft itself is not necessarily building a product around the program. That job falls to clients such as Human Interact, a virtual reality content studio.

Human Interact’s debut project is Starship Commander, a voice-powered VR experience that takes full advantage of Microsoft’s powerful new tools. UploadVR had the chance to try Starship Commander firsthand, and what we found is the most sophisticated voice interaction engine we’ve seen yet in an immersive application.

Starship Commander is more interactive film than pure game, and the entire thing revolves around voice. You play as the pilot of an interstellar spacecraft, joined on your mission by a supercomputer and a holographic superior officer. You interact with both of these characters through voice commands, with a world of branching options to explore. In my short demo I experienced maybe 20 lines of dialogue, but the developers explained that this was barely scratching the surface of the hundreds they had programmed.

CRIS in action while playing Starship Commander

Starship Commander was built using CRIS and LUIS, and as a result the characters in the game were able to understand and respond to unique vocabulary about spaceships and aliens. Thanks to LUIS, they were also able to correctly understand what I wanted to do even if the exact phrasing I used had not been manually mapped to a given outcome. By saying “let’s move on” I was able to advance the story. Even though that particular combination of words was not attached to that specific command, the experience was able to read my intent thanks to Microsoft’s brand new bag of tricks.

With Custom Speech Service and an entire fleet of Cognitive Services (eight are available now with 17 in preview to select developers), Microsoft is on a mission to “make AI available to every organization and every person.”
