With the Lynx R-1, the French startup Lynx wants to bring a new MR headset to market. To get production off the ground, the startup is asking for backing on Kickstarter and has already collected more than 600,000 euros; the original funding goal was 300,000 euros.
Lynx R-1 MR
The Lynx R-1 will be built around a Qualcomm XR2 chip, the same chip used in the Oculus Quest 2 and HTC Vive Focus 3. The headset can be used standalone or connected to a PC, you can move freely around the room, and AR is handled by cameras on the front whose image is passed through to the displays, much like the competition.
The headset's unique selling point, however, is its unusual lens design and the resulting form factor. Lynx uses a "four-fold catadioptric freeform prism", which allows the lenses to sit extremely close to the displays.
For input, Lynx relies on tracking your hands. If that isn't enough for you, optional controllers are also available.
If you want to support Lynx and secure a Lynx R-1, you'll have to pledge at least 530 euros to the project. You can find the Kickstarter page and all further information here. The campaign runs for another 7 days, and the first headsets are scheduled to ship as early as April 2022.
San Francisco-based startup Kura Technologies (official website) claims it will launch compact AR glasses with a wide field of view, high resolution, high opacity, high brightness, and variable focus in mid-2020. We got the chance to try a series of four different prototypes from Kura that each demonstrated portions of these promises. We came away very impressed.
The nature of the demos we tried makes it hard to say what the actual finished device will be like, but we're optimistic. None of the demos ran on a product resembling the mock-up images on Kura's website, and all four prototypes we tried were described as being 8-12 months behind where the company says its technology is now. You can read more about those demos deeper in this story.
No photographs or videos of any of the hardware were allowed during my meeting, only workspace photos like the ones shown below.
We’re told Kura intends to bring a functional prototype to CES in January that will have all of the functionality in a single device. However, it’s worth noting that the device they plan to ship in mid-2020 is specifically targeting only enterprise customers first.
If they can pull it off — and that remains a big if — this startup will have created a product with specifications years ahead of all known public competitors, including Microsoft, Magic Leap, and Nreal.
The Kura Gallium
To understand why what they're doing seems significant, let's take a step back. Kura's product is called Gallium. Kura describes Gallium as having an "eyeglass form factor", yet its claimed specifications go far beyond those of even the largest and bulkiest known AR headsets.
According to marketing materials, the glasses are powered by a hip-mounted compute pack built around a Snapdragon 855, similar to how Magic Leap and Nreal work. But CEO Kelly Peng told us the initial version will likely be tethered to a PC at first, with the compute pack offered as a secondary power option later. Further in the future, an adapter could allow for wireless communication with a PC that's within range.
Kura’s website lists the price for the glasses with the compute pack as $1199, a Lite version without the compute pack for $899, or the compute pack on its own for $399.
Claimed Specifications
Field of View: 150° (Binocular, Diagonal)
Focus: 10 cm to infinity
Brightness: 4000 nits (outdoor viewable)
Max Transparency: 95%
Resolution: 8K 75Hz / 6K 100Hz / 4K 144Hz
Image Quality: 100% DCI-P3, HDR, True Black Capable
IPD: Automatic accommodation of 55-68mm
Weight: 80 grams
These specifications would put Gallium significantly ahead of any known AR headset. HoloLens 2 and Magic Leap have a maximum diagonal field of view of only around 50 degrees. Magic Leap One is not bright enough to be used outdoors, and even HoloLens 2 has just 1/4 of Gallium's claimed brightness.
Crucially, Magic Leap One supports just two focal planes, and HoloLens 2 is fixed focus. With automatic IPD accommodation and varifocal optics from 10cm to infinity, Gallium would be visually comfortable to wear all day.
If the company truly is achieving all this in an 80g pair of glasses, it would likely accelerate the arrival of consumer AR by years. But the magnitude of these claims should be met with skepticism — even after we tried many of these features in person.
The Prototype Demos
During our visit to Kura we tried out four different prototype demos and spoke with CEO Kelly Peng for nearly an hour. Our tour of the Kura workplace, which doubles as a home for several of the core team members, was eye-opening (pun intended) to say the least.
Three of the four demos did not have head tracking and were not on wearable devices. Instead, they were mounted on tables, completely stationary, to show off the display, optics, field of view, brightness, etc. in controlled environments. This is common for early head-mounted technology prototypes.
The first demo, which was stationary on a mount, was a great example of their optics technology using 90%+ transparent lenses combined with high-brightness images to really make objects look like they were in the real environment rather than suffering from the faded and blurry effect you get in a lot of current AR devices. The models shown were far brighter than anything I'd seen in an AR device before, with great focal clarity. As a glasses wearer, the quality of the image and the field of view was really encouraging to see in such an early stage of development.
The second demo was also stationary, but this one had an even larger field of view and showed a larger range of animations and types of content. Most of the animations shown in the demo video embedded above were shown during this demo, and they looked about as crisp as you could hope for in an AR device. Again, I was pretty impressed. This demo also included hand tracking so I could reach out and see my hand moving around. There wasn’t any interaction here but it did show a wide range of colors.
Next up, in demo number three, I tried a fully wearable device similar to the one pictured below as an early prototype that had head and hand tracking operational. Again, this was said to be at least eight months old. This was another good demonstration of the field of view because when my hand reached out into the view of the cameras and lenses, it added an augmented overlay to my skin and I could interact with all of the floating multi-colored particles. I could reach my right arm across my body and see the AR overlay from my fingertips all the way down to my elbow. It didn't have a postage stamp-sized vision box like in other AR devices. There was a trailing delay when I moved my arm before the overlay re-aligned itself, but again, early prototypes and all that.
Finally the last demo was the roughest and most experimental of the bunch. One of the Kura Gallium’s touted features is the focal distance that can adapt from 10cm all the way to infinity. In this demo I saw a green matrix-style animation of a cat floating in front of my eyes, almost large enough to look life-sized, and then it slowly shrank and faded into the distance like it was being shot into space, Bag Raiders style. The trick here though is that I could still clearly see it even as it drifted into the distance. It never lost focus.
Kura appears to be using Occipital’s technology for positional tracking and scene reconstruction. This lets Kura focus their resources on the display technology. An Occipital video from November 2018 appears to show footage of an old Kura prototype. This prototype has a form factor significantly bulkier than the images shown today on Kura’s website and it’s very similar to one of the prototypes we tried — specifically the third one mentioned above.
Prior to our demo we reached out to Peng about this prototype, who confirmed on Twitter: “This is purely software or integration test demo we built early on, not the optics we are going for product. We used to make some giant reflectors optics 3-4 years ago, but since then totally moved away from that because of the size, contrast ratio, brightness issues.”
“Structured Geometric Waveguide”
How exactly is Kura achieving this?
Before this announcement, no credible company had claimed specifications anywhere near these. As recently as October 2018, Facebook's chief AR and VR researcher Michael Abrash stated that the technology to enable compact wide-FoV AR glasses "doesn't yet exist".
Almost all AR headsets today, including HoloLens 2 and Magic Leap One, use a diffractive waveguide. This technology has a fundamental limitation on field of view, and can make the real world appear dull due to the semitransparent nature of the see-through optics. This is all despite both products having a larger form factor and higher price than Gallium.
Kura claims their breakthrough is to use a microLED strip with a “structured geometric waveguide” as the combiner. While microLED displays are normally expensive and there are ongoing efforts to figure out how to affordably mass produce them, Kura’s design would only need a single row of pixels, which would allow for low cost and mass production.
The company describes this as follows:
“Like in a diffractive waveguide, light is coupled down the eyepiece via total internal reflection, but unlike a diffractive system, the structures in the eyepiece are explicitly much larger than a wavelength, which prevents colored ghosts in ambient light. Furthermore, the out-coupling elements are ordinary geometric optics, not holograms, which mitigates the narrow angle of acceptance from which diffractive elements suffer from. In addition, a careful multi-layer design allows the out-coupling elements to cover about 5% of the eyepiece’s area, allowing us to maintain very high transparency.”
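For context, the total internal reflection the quote refers to is standard geometric optics: light bouncing along the inside of the eyepiece stays trapped as long as it strikes the glass-air boundary beyond the critical angle, and the out-coupling structures are what deliberately redirect part of it toward the eye. A quick worked example, assuming a generic glass index of about 1.5 (our assumption, not a figure Kura has published):

```latex
% Critical angle for total internal reflection at a glass-air boundary:
\[
  \theta_c = \arcsin\!\left(\frac{n_{\text{air}}}{n_{\text{glass}}}\right)
           \approx \arcsin\!\left(\frac{1}{1.5}\right)
           \approx 41.8^{\circ}
\]
% Rays that hit the boundary at an incidence angle (measured from the normal)
% greater than \theta_c are totally internally reflected and keep traveling
% down the eyepiece; shallower rays partially refract out. The out-coupling
% elements described in the quote are the structures that deliberately break
% this condition over roughly 5% of the eyepiece area to steer light to the eye.
```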
It is possible Kura Technologies invented the missing display technology needed to make mass market AR glasses achievable. Again, though, it is hard to confirm exactly how the Gallium works without seeing all the pieces put together into a finalized product design. It is not uncommon for technologies to be possible and impressive in the prototype phase but never work out as a true product due to issues such as manufacturing being too hard or the expense involved in producing hardware at scale.
But since we’ve seen the pieces all functioning, albeit mostly separately at this stage, we’re optimistic enough to say Kura seems to be one of the key companies to keep an eye on in the AR space.
Kura’s Team
Kura's CEO Kelly Peng is on the Forbes 30 Under 30 list for Manufacturing & Industry. At UC Berkeley, Peng says she worked on custom LiDAR designs for self-driving vehicles. CTO Bayley Wang researched high-performance optical simulation algorithms at MIT and won a major North American undergraduate mathematics competition. COO Garrow Geer is said to have been a particle accelerator operator and research engineer at Jefferson Lab and CERN.
Kura also tells us it has hired GoPro's former lead electrical engineer, the designer of the electronics in the Xbox controller, "one of the most reputable optical experts in the world", experts holding over 100 patents in optics, displays, and materials with several decades of combined experience in optical design, and industry leaders with over 20 years of experience in AR/VR manufacturing and sales.
The company describes its team as “industry leaders, brilliant technologists and experienced subject-specific experts, with MIT, UC Berkeley, Stanford, EPFL & UBC alumni.”
Is Facebook Doing This Too?
Facebook, the company behind Oculus, is also developing AR glasses. While the company has not revealed any specifics on what display technology it is using, it did give several hints at Oculus Connect 5 in 2018.
When talking about display technologies, the company's chief researcher Michael Abrash stated that waveguides "could potentially extend to any desired field of view in a slim form factor". On screen, a graphic showed waveguides allowing for up to a 200 degree field of view.
He also noted that since no suitable display technology existed yet for AR, “we had no choice but to develop a new display system“.
At the time, this confused some optics experts, as the well-known limitations of diffractive waveguides cap their practical field of view at around 50 degrees. Abrash's description of waveguides did not match any known design.
Given Abrash’s comments, Facebook’s large investment in AR research, and the company’s hiring of renowned display technologies experts like Douglas Lanman, it is possible Abrash was referring to a similar system to what Kura is working on — a non-diffractive waveguide using geometric optics.
We’ll keep a close eye on Oculus Connect 6 for any details on Facebook’s approach to AR optics and how it compares to what we’ve seen of Kura.
Stay tuned to UploadVR for more details on Kura, including our in-depth interview with CEO Kelly Peng next week.
This article about Kura is co-authored by Staff Writer David Heaney, who did the background research and wrote the first draft, Senior Editor David Jagneaux, who provided the hands-on impressions and additional details, and Managing Editor Ian Hamilton, who provided editing.
Editor’s Note: We added clarification that Kura is targeting enterprise customers first.
The wait for Magic Leap's AR headset is over! The Magic Leap One can now be ordered from the company's official website starting at $2,295. However, the company currently does not ship to Europe. If you want to bring the headset to Germany, you will have to use a forwarding service or rely on acquaintances in the US for now.
Magic Leap One can now be ordered in the US
Magic Leap calls the current version the "Creator Edition". This is presumably a hint that only a small amount of experimental software will be available in the store at launch; accordingly, developers will first have to fill it up.
The Lightpack, as the Magic Leap One's small compute unit is called, houses an NVIDIA Pascal GPU with 256 CUDA cores along with two Denver 2.0 64-bit cores, four ARM Cortex-A57 64-bit cores, and 8 GB of RAM. There is 128 GB of storage, and the battery is supposed to last up to three hours of continuous use. For connecting to other devices, Bluetooth 4.2, WiFi 802.11ac/b/g/n, and USB-C are built in.
For the Lightwear there are no exact specifications yet. Speakers and a headphone jack are built in, however, and spatial audio is supported. The company also promises that the headset will recognize its surroundings and nearby objects, which should make it possible to place virtual objects in the real world. The website further states that prescription lenses for the Magic Leap One will be available soon, so you can leave your regular glasses in the cupboard. Magic Leap has not yet officially revealed the resolution or the size of the field of view.
The controller charges via USB-C and is supposed to offer up to 7.5 hours of 6DoF fun.
At launch there will be only a few experiences to try on the Magic Leap One. Still, a social AR experience awaits, and the shooter Invaders, known from older videos, should also be available at launch. You can find more information about the new AR headset and the first applications on Magic Leap's website.
There is no real announcement yet, but CEO Rony Abovitz has stuffed his Twitter account with hints pointing to August 8, 2018, and a rocket is already standing on Magic Leap's website, ready to launch at any moment. Whether the rocket lifts off at 8:08 am ET (2:08 pm German time) or 8:08 pm ET (2:00 am German time on August 9) is unclear. Either way, we are very curious to see what the next 24 hours will bring.
With the Magic Leap One the company is introducing its first AR headset, but many factors speak against it becoming an everyday companion. Developers, however, can certainly look past the form factor and the limits of the technology and start building the first applications for the next generation of augmented reality systems.
Whether there will be reviews today, we cannot yet say. We do expect, though, that it will become possible to order the AR headset at some point during the day. Whether it will then ship promptly is still completely open.
With Jurassic World Alive, a new AR game for ARCore and ARKit is now available that follows in the footsteps of Pokémon Go. Instead of collecting Pokémon, you hunt dinosaurs and train them for battles in an arena.
Jurassic World Alive released
Since we are unfortunately still struggling with connection problems, we cannot give you any deeper insights into the game yet. The app can be downloaded from the store for free, but a subscription costs 10 euros per month. There is supposed to be a free trial period, however.
Jurassic World Alive uses a combination of GPS and ARCore/ARKit to place the dinosaurs correctly in the world. Smartphones without ARCore support can still join the hunt, although the actual ARCore features are disabled. You can read here which Android smartphones currently support ARCore.
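For a rough sense of how an app of this kind can fall back gracefully on phones without ARCore, here is a minimal Kotlin sketch using Google's ArCoreApk availability check; the enableArMode and enablePlainMode hooks are hypothetical placeholders, not anything from the game's actual code.

```kotlin
import android.content.Context
import android.os.Handler
import android.os.Looper
import com.google.ar.core.ArCoreApk

// Hypothetical mode hooks standing in for whatever the app does in each case.
fun enableArMode() { /* start the camera pass-through AR placement view */ }
fun enablePlainMode() { /* fall back to the GPS-only map view */ }

// Decide once at startup whether this phone can use the full AR features.
fun chooseRenderMode(context: Context) {
    val availability = ArCoreApk.getInstance().checkAvailability(context)
    if (availability.isTransient) {
        // ARCore is still determining support; re-check shortly.
        Handler(Looper.getMainLooper()).postDelayed({ chooseRenderMode(context) }, 200)
        return
    }
    if (availability.isSupported) enableArMode() else enablePlainMode()
}
```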
The UK start-up WaveOptics drew attention last year when its Series B funding round brought in over 15 million US dollars to finance the company's vision. According to Business Insider, Octopus Ventures, Touchstone Innovations, Robert Bosch Venture Capital, and Gobi Ventures participated in the round. By the end of next year, WaveOptics wants to conquer the consumer market with a new pair of AR glasses.
WaveOptics plans a $600 AR headset for 2019
Little is known about the hardware so far, but the developers are already fairly confident about the price and say the AR glasses will cost less than 600 US dollars. The glasses are also set to be built in a partnership with the nanotech manufacturer EV Group (EVG):
"This partnership marks a turning point in the AR industry and is a decisive step toward the mass production of high-quality AR solutions, a capability that has not been possible until now," says David Hayes, CEO of WaveOptics. "This collaboration is key to the development of AR wearables; together we are well positioned to bring mass-market innovation to AR and open up new paths to scalability at lower cost than ever before."
It is conceivable that other companies will also benefit from the partnership, since mass production at EV Group could drive down the cost of specialized components. There is no information yet on the possible specifications of WaveOptics' upcoming AR glasses, but an extremely large field of view is being touted and the images suggest some form of room recognition. The team plans to offer the first development kits as early as July 2018.
Microsoft continues to work on the next version of HoloLens, and now the AR headset is set to get smarter. A company employee has announced that the HoloLens' central processing unit will gain an additional AI processor dedicated exclusively to artificial intelligence computations. The chip extends the Holographic Processing Unit (HPU) and is designed to run deep neural networks.
An AI chip for the HPU in the next HoloLens
Marc Pollefeys, Director of Science and a member of the HoloLens development team, discusses the new chip in a blog post. It is meant to help the headset recognize its real-world surroundings. He points to the great progress deep learning has made in solving this complicated task, but also to two big challenges: you need a large amount of labeled data to train the neural network, and the computation involved is of a kind that today's non-specialized processors are poorly suited for. Existing solutions in this area have been aimed almost exclusively at cloud server farms.
As a self-contained system with its own compute unit and battery, the HoloLens needs a different approach to handle things like hand tracking with the lowest possible latency. That is why Microsoft developed its own chip, the Holographic Processing Unit, which processes all of the sensor data. Microsoft is now developing the second version of the HPU, which will contain a dedicated AI processor and thus allow deep neural networks to run directly on the headset. Harry Shum, vice president of Microsoft's AI research group, presented the chip for the first time at the CVPR 2017 (Computer Vision and Pattern Recognition) conference in Hawaii and showed an early version of the HPU running live code for hand recognition.
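To make that workload concrete: what such an AI coprocessor accelerates is, at its core, large batches of multiply-accumulate operations, as in the single fully connected layer sketched below. This is a generic Kotlin illustration under our own assumptions (toy layer sizes, made-up weights, a ReLU activation), not Microsoft's code.

```kotlin
// One fully connected layer, y = relu(W·x + b): the dense multiply-accumulate
// pattern that neural-network accelerators are built to run in parallel.
fun denseLayer(x: FloatArray, w: Array<FloatArray>, b: FloatArray): FloatArray {
    val y = FloatArray(b.size)
    for (i in b.indices) {
        var acc = b[i]
        for (j in x.indices) {
            acc += w[i][j] * x[j]   // one multiply-accumulate per weight
        }
        y[i] = maxOf(0f, acc)       // ReLU activation
    }
    return y
}

fun main() {
    // Toy sizes: 4 inputs mapped to 3 outputs with made-up weights.
    val x = floatArrayOf(0.2f, 0.5f, 0.1f, 0.9f)
    val w = Array(3) { FloatArray(4) { 0.1f } }
    val b = FloatArray(3) { 0.01f }
    println(denseLayer(x, w, b).joinToString())
}
```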
The AI coprocessor will be used in the next version of HoloLens and will run continuously on battery power, Pollefeys reports. That is just one example of the new capabilities the company is currently working on for the next version of its AR headset. The Microsoft employee closes by stressing how much staying power the development of mixed reality solutions requires; after all, mixed reality and artificial intelligence represent the future of computing.
In developing the HoloLens, Microsoft has decided to skip version 2 and ship the third version directly; however, it is currently not expected until 2019.
Augmented World Expo 2017 has just wrapped up, and people are left with some old, some new, and some genuinely innovative memories. We will see plenty of posts, updates, and images of products from companies that announced and launched their version "x" products. It is an exciting time for augmented reality and definitely an area to explore as we move forward.
The Purpose of a Conference
Technology showcases have been a great attraction for years, and these conferences help bring to light the innovative, or sometimes imitative, products that start-ups have been working on for some time. Some attend to understand and explore the emerging technology through the products and services on display, some to enjoy the gadgets, some to invest and network, and some to find new career opportunities. Overall, these conferences are places to meet and explore, where the tech grows and shows its potential.
The Take-Away
Many believe that disruptive technology is the new way of exploring and innovating on ideas; the point of gathering at these conferences is to expand your thinking, explore opportunities, get motivated, and build new connections.
Another takeaway is to understand and connect with the investor world, which is the backbone of any future technology expansion and of start-up growth.
The Motivation
For an entrepreneur, those few minutes of direct interaction with an investor are critical: a chance to explain the idea and build enough interest to lead to funding. While that is a noble thought, the path to funding is neither this direct nor this simple.
Being an entrepreneur and a strong supporter of the start-up ecosystem, I have seen both sides of the equation: as an entrepreneur, the eagerness to reach the investor and secure funding; and as an interested individual, evaluating start-ups' pitches to understand where, and with whom, the right funding fit may lie.
The Investor Panel at AWE 2017
On the second day of AWE 2017, I was invited to be a panelist on the investor round table, alongside co-panelists from Canvas Ventures, Qualcomm Ventures, Motorola Ventures, Comcast Ventures, and Brabantse Ontwikkelings Maatschappij (BOM) from the Netherlands. It was a wonderful opportunity to be on stage with some great minds, and also a chance to understand the thinking of, and answer questions from, the rising generation of entrepreneurs. While for start-ups all investors are equal as long as they are willing to listen and invest, the world of investment has its own cycles, and my co-panelists conveyed this beautifully during the session.
While there were a lot of questions about the industry, the landscape, and the sector, three questions stuck in my mind as particularly interesting:
1) Do you take cold calls?
The clear answer is "no", but the thinking behind the answer matters. It is just like a job portal: submit your resume online and there is less than a 1% chance you hear back (though sometimes you do), whereas going through a referral for the same job puts your chance of first visibility close to 90%, and the odds of landing an interview are just as high. In the investor world the story follows a similar pattern: any idea that comes through a network or a strong referral will get its day to present; as for cold calling, well...
2) What counter-intuitive advice do you have for start-ups?
This question gets thrown at me a lot, and my sincere answer is: know your product, know your team, and know your market. Miss any of these and you may go through a rough cycle.
3) What are the red flags in AR/VR?
The red flags don't just exist in AR and VR; they exist in all technology innovation. To me, there are two types of start-ups: one that innovates and is first to take the risk, and another that follows the idea and tries to fine-tune it. If you are the latter type, you will be entering the market against a competitor who has first footing and is probably the first the investors saw or heard from; you need patience and deeper thinking to bring your USP to light and get the traction you need, and this could take time. If you are the former type, you have the advantage of being first to market, but it still may not be easy, as you need to prove your idea.
Overall, the panel was amazing and the questions thrown at us were great. The biggest takeaway for entrepreneurs from this session, I feel, was: "let us know what you need and how you will change the world with technology." Unless that is clear, it will be an uphill battle for the start-ups and a challenge for the investors.
My recommendation to all start-ups: as technologies emerge, we will see many more new and innovative ideas. Believe in your dream, because only then do you show the passion and depth behind your ideas, and that gives them a much better chance of visibility and acknowledgment.
"Those who believe in their ideas will find a way to make them happen."
I have been an entrepreneur and an advisor to a few select start-ups (in AR, mobile, semiconductors, and AI) for more than a decade. My passion is helping start-ups restructure and get back on the growth path. I am currently working with a consumer product company and looking to help more.
Robots will take control. It's a scary thought, but it depends on how humans manage that control. Automotive ADAS and self-driving systems are fast-growing markets already being adopted by all the big players and brands in the car and high-tech industries. It is not a given that the "big boys" of the car industry trust these solutions, yet they are already implementing them in new cars. Behind that decision stands a lot of thinking and a list of questions to ask:
Is it safe?
What are the risks?
Is it cost effective?
Is it the best timing to go to market?
Do we really need it?
What are the real benefits?
Everyone has heard about Mobileye's solution, Tesla's self-driving car, and Google's vision. But behind the big names there are many approaches and solutions we haven't heard about yet, and new trends and technologies being developed in the backyard of the big companies. Automotive solutions combine several technologies that already exist, such as computer vision, AI, AR, and IoT. What stands behind all the new buzzwords? Is it just a narrow view of a future in which we as humans gradually give up control? We can't answer everything here, but we need to understand the consequences of these new trends and technologies.
Everyone is trying to position themselves as innovative.
Toyota is using an artificial intelligence agent called Yui. Yui takes communicating with your car to the next level, with the AI actually learning your habits and then responding through in-car lighting, sound, and touch. Toyota believes that a simple question-and-answer voice command system (like Siri) is outdated and that this kind of AI technology is the future.
Ford and Amazon are collaborating to provide an Alexa-based voice control system. Nvidia, known more for its excellent processors and graphics hardware, has tied up with Audi to build the next generation of autonomous vehicles; they have announced they will showcase a self-driving car within 12 months. The two companies have already been working closely over the past year to create a new AI car platform that will serve as a base for future Audi products.
eyeSight, an in-car sensing solution, tracks the driver's attention on the road, detecting when the driver is distracted or showing signs of drowsiness. Using deep learning and other machine learning tools, eyeSight's automotive solutions address three main aspects of the driving experience.
Argus Cybersecurity – This Israel-based company is tapping into an explosive new marketplace: automotive cybersecurity. Millions of new cars and trucks on the road today have Internet connectivity, allowing automakers to deliver a host of new services to their customers, including Wi-Fi hotspots and the ability to remotely lock and unlock a vehicle from a mobile app. But the increasing level of connectivity is also going to expose consumers and manufacturers to a new set of risks if, or more likely when, hackers find weaknesses and exploit them.
Polysync, founded in 2013, has developed a middleware platform that lets automakers and other autonomous vehicle startups test, gather data, and eventually deploy driverless vehicle applications without spending an inordinate amount of time and resources. The system is designed to turn software algorithms and sensors into plug-and-play applications.
The market of companies specializing in virtual reality is growing noticeably. The Venture Reality Fund, an investor group that backs early-stage startups in virtual reality, augmented reality, and mixed reality, recorded 40 percent growth in the number of companies in 2016.
Immense growth in VR companies
That is immense growth for this still-young industry. One group of companies stood out in particular: the largest share was made up of firms building content apps for head-mounted displays, and the games and entertainment segment more than doubled. This growth is not driven by new foundings alone, though; the already established big players are also stepping up their efforts to gain a foothold in the market.
B2B, education, healthcare, and journalism are growth drivers
Although VR headset sales are, compared with other industries, still off to a leisurely start, well-positioned independent development studios are managing to stay in the black. According to the VR Fund, this trend is very much a global one. Traditional investment firms from the US have already recognized the economic value of virtual and augmented reality and are increasingly funding the development of VR content and apps. In Asia, meanwhile, a large VR gaming market is emerging, actively supported by the big players in the industry.
Beyond that, the B2B sector, education, healthcare, and journalism are increasingly becoming growth drivers for VR, AR, and mixed reality. Companies specializing in advertising, marketing, and analytics in particular were able to attract investor money. Especially in demand is the development of VR content and management systems, delivered as a VR app or as a 360-degree WebVR experience for the B2C market. The development of 3D audio also saw increased demand. And the fact that more and more market leaders such as Microsoft, which is establishing its own VR platform in cooperation with HP and Lenovo among others, are pushing into the market could help VR establish itself in the mainstream in the near future.