NFT Avatar Startup Genies Raises $150m on $1 Billion Valuation
No matter which virtual world (or metaverse) you spend your time in, they all need avatars: a way for players to represent themselves digitally while also offering an outlet for creative self-expression. Last week, 3D avatar startup Genies announced a Series C investment of $150 million USD to expand upon its Web3 vision.

Genies was founded in 2017 and boasts previous investors such as former Disney CEO Bob Iger. The latest round was led by private equity firm Silver Lake, valuing the company at a substantial $1 billion. Genies says the funds will be used to grow its non-fungible token (NFT) avatar ecosystem, a platform that allows creators to build and mint their own avatars.
So what is an avatar ecosystem? Genies describes it as a collection of “avatars, fashion and collectibles, spaces and venues, and social experiences,” all created by users. The eventual idea is that these could then be used across the metaverse.
“We believe avatar ecosystems are going to shape Web3 the same way that mobile apps defined Web2,” said Akash Nigam, CEO of Genies in a statement. “With every advancement of the internet, an expansive new region of entrepreneurial skill sets is born. In Web3, Gen Z avatar ecosystem builders are going to be the leaders of innovation and, through our creator tools, we strive to empower their wildest imaginations, ideas, and experiences as avatar creations.”

“We’re just trying to invest in the very best technology companies,” Egon Durban, the co-chief executive of Silver Lake, told New York Times’ DealBook. “Sometimes it’s a small company like this, and other times it’s huge, large companies that need to be transformed.”
This continues a series of major announcements from Genies over the past year. Back in May 2021, the company raised $65 million with a round featuring the likes of Dapper Labs, Polychain, and Coinbase Ventures. December saw Genies collaborate with Universal Music Group, whereby artists on the label created avatars of themselves.
Creators using the Genies platform build and mint these NFT avatars to sell via the platform’s marketplace, with the company taking a 5% cut from each sale. Currently, the tools are accessible by invitation only, with a wider public rollout expected this summer. For further updates on Genies, keep reading gmw3.
How To Make Quest Avatars – Edit, Create & Change A Meta Avatar On Quest 2
Looking to make an avatar to use across your Quest in experiences like Horizon Venues, Horizon Worlds or Eleven Table Tennis? Here’s how to make a Quest avatar.
On Quest headsets, there are a couple of different kinds of avatars. Apps made by third-party developers sometimes use their own avatar systems, which you set up on a per-app basis inside each specific experience — major examples of these include VRChat, Rec Room, Altspace, and Bigscreen. However, a number of developers are also adopting Meta’s system-wide avatars and integrating support into their own experiences, meaning you can use that one avatar across many supporting apps.
For this guide, we’ll focus on just the Meta avatars – how to set up and edit your avatar so that it is ready to use across your Quest.
How To Make A Meta Avatar – Quest 2
The process is easy once you know how, but might be confusing to find at first.
Unlike other settings and features, Meta avatar creation is not in the Quest settings menu.
Instead, pull up the Quest toolbar and click on the little profile picture next to the status icons, pictured above.
Once you’re on the profile page, hit the Edit Avatar button, pictured above. If you’ve never made an avatar before, you might just see a blank avatar on the profile page.
After that, you’ll be greeted with the avatar edit screen, pictured above — the rest is pretty self-explanatory. Just make sure you save your avatar once you’re done.
Once that’s finished, your avatar will be used in supported experiences like Horizon Venues, Worlds and, soon, Eleven Table Tennis. Meta notes that “Your avatar is public.”
Editing Meta Avatar in Horizon Venues
It’s also possible to edit (or create, if you’re new) your Meta avatar in Horizon Venues as well. To do this, simply launch Horizon Venues and teleport over to the mirror in the starting room.
This version is a little different from the Quest home editor, but achieves the same objective. Any changes you make here will be saved to your Meta avatar across all experiences.
It doesn’t matter which editor you use, it’s just down to personal preference — the Venues editor does give you a bit of a better close-up look at your avatar in real time though, so it has that going for it.
Need any other help with your Quest headset? Check out our New To VR? page for more guides and helpful info.
Epic Games MetaHuman Creator Now In Early Access
The MetaHuman Creator from Epic Games is now available in early access.
MetaHumans are Epic’s attempt to bring lifelike humans into video games. As of today, the MetaHuman Creator is available as part of an Early Access program, meaning that anyone can create their own 3D human avatar for use in Unreal Engine.
Epic unveiled the MetaHuman Creator in February, presenting fully rigged, high-fidelity virtual character models that can be used in real time in Unreal Engine. The two character models shown in the announcement demo are incredibly detailed and look stunningly realistic.
The hair is a particularly impressive element — in many video games, realistic hair presents quite the challenge due to its complex and dynamic nature. The hair, including the facial hair, on Epic’s MetaHumans looks much closer to real life than most standard video game models.
Shortly after the announcement, YouTuber ETR VR added VR support for the sample models released by Epic, allowing us to get up close and personal with the MetaHumans in VR. The models do take a bit of a fidelity hit in VR due to performance, but it’s still incredibly impressive. You can check out some footage captured back in February embedded above.
In addition to the two sample models that were available back in February, Epic is now providing 50 ready-made MetaHumans for anyone to download and use in Unreal Engine. They are available through Quixel Bridge, which should now have a dedicated MetaHumans section.
Will you be creating your own MetaHuman? Let us know what you think in the comments below.
LIV Now Supports Full-body Avatars from ReadyPlayerMe, Making it Easy to Stream VR Without a Green Screen

Many VR streamers use complicated mixed reality setups to show themselves from a third-person perspective inside the virtual world. LIV, a leading tool which makes this possible, now supports free, customizable, full-body avatars from ReadyPlayerMe, making it possible to stream your avatar inside of VR without the need for a green screen.
In addition to true mixed reality streaming, Liv has supported streaming with avatars for some time. However, actually finding a unique avatar for yourself was no simple task. Now, Liv has partnered with avatar maker ReadyPlayerMe to make it as simple as can be.
ReadyPlayerMe allows you to build a free full-body avatar—optionally based on a photo of yourself—in mere minutes. You can use the avatar as the character in select Liv-supported VR games, allowing stream viewers to see your movements in third-person.
Here’s an example of a ReadyPlayerMe avatar in Pistol Whip streamed via Liv:
What Sadie said! They have improved on them, they now are full body and support finger tracking and full body tracking! It's pretty smooth! pic.twitter.com/J8rY5UwWOo
— AtomBombBody (@AtomBombBody) January 17, 2021
Avatars from ReadyPlayerMe are moderately customizable, and it’s easy enough to get something you’re happy with relatively quickly, though we hope to see more customization options in the future (like height, build, and more control over outfits).

You can make your own ReadyPlayerMe avatar to import to Liv right here. If you want to download your avatar for some other use, you can make one here and download it at the end of the process as a .GLB file for use in other applications.
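For context on that .GLB format: a GLB file is the binary container defined by the glTF 2.0 specification, and it opens with a fixed 12-byte header holding the ASCII magic `glTF`, a container version, and the total file length. Here is a minimal sketch of checking that header in Python with only the standard library; it builds a hand-made header in memory rather than reading a real exported avatar:

```python
import struct

def read_glb_header(data: bytes):
    """Parse the 12-byte GLB header defined by the glTF 2.0 spec:
    magic (ASCII 'glTF'), container version, and total file length."""
    magic, version, length = struct.unpack("<III", data[:12])
    if magic != 0x46546C67:  # b'glTF' read as a little-endian uint32
        raise ValueError("not a GLB file")
    return version, length

# Build a minimal stand-in header in memory to demonstrate parsing
# (a real avatar file would continue with JSON and binary chunks).
payload = struct.pack("<III", 0x46546C67, 2, 12)
print(read_glb_header(payload))  # (2, 12)
```

A real downloaded avatar would be opened with `open(path, "rb")` and the same check run on its first 12 bytes before handing the file to a glTF loader.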
Streamer Atom Bomb Body also has a detailed walkthrough for configuring Liv with your new avatar here.
The post LIV Now Supports Full-body Avatars from ReadyPlayerMe, Making it Easy to Stream VR Without a Green Screen appeared first on Road to VR.
Wolf3D Raises $1.3M to Further Support Its Cross-game Avatar Platform

Wolf3D, a Tallinn, Estonia-based 3D scanning and avatar company, has raised $1.3 million in its latest funding round, something it says will help further improve its cross-game avatar platform.
The investment round includes support by Trind Ventures, Presto Ventures, Koha Capital, Spring Capital, Contriber Ventures, and various angel investors. This brings the company’s overall funding to $2.8 million.
Called Ready Player Me, the company’s software is said to allow anyone to create “a personal full-body avatar from a selfie,” and, critically, aims to be platform- and game-agnostic. Wolf3D says it can support “many different art styles of avatars for all kinds of game styles and genres.”
The company notes that its latest cash injection will fuel “the next level with full-body personal avatars for games.”

Wolf3D has been developing 3D scanning tech since its founding in 2014. Six years later, the company is now making its avatar scanning technology available for small and mid-sized developers, as well as providing it to select companies for free.
The company admits that a monolithic ‘metaverse’ is probably not how things will shake out in the future, which is why it’s building cross-platform services. The long-term goal, the company says, is to make its avatar tech “a link between many different virtual experiences, adding them together into one big virtual world that you can explore seamlessly with your avatar and the same set of friends.”
Food for thought: the company’s ambitions depend on platform holders or groups of individual developers implementing its system in the first place. While there’s no telling what the VR landscape will look like in the future, if Wolf3D can provide a ready-made solution that’s flexible and good enough, an acquisition seems far more likely than the company taking on the difficult job of stitching together today’s still largely fragmented digital world on its own.
The post Wolf3D Raises $1.3M to Further Support Its Cross-game Avatar Platform appeared first on Road to VR.
Hatsune Miku Is Getting Her Very Own VR ‘Amusement Park’ This Summer
Hatsune Miku is a vocaloid software voicebank, or in other words an entirely digital vocal performing artist, with a massive fan following in Japan. This summer, she is getting her very own VR ‘Amusement Park’ dubbed Miku Land Gate that you can visit for free using VirtualCast.
Even if you’ve never seen or heard a performance, chances are if you’ve spent any length of time on the internet over the past decade then you’re probably aware of Hatsune Miku. The character is represented by a teenaged girl avatar with turquoise twintail hair. Her name was created by combining the Japanese words for “first,” “sound,” and “future.”
To reiterate: she is not an avatar controlled by someone singing into a microphone; she is entirely digital. Her voice is created using Yamaha’s Vocaloid synthesizing technology.
From the sounds of it, Miku Land Gate will be like a digital music festival you can visit inside VR. In addition to performances you’ll be able to watch alongside others while wearing VR headsets, there will be areas to explore and even merchandise to purchase.
Unsurprisingly, Hatsune Miku has already appeared across a wide range of VR games and apps, such as the rhythm game on Steam and PSVR appropriately titled Hatsune Miku VR.
Miku Land Gate will run from August 8th – August 10th 2020 inside VirtualCast. It’s a free event and you can learn more by visiting the official website here.
h/t: VRFocus
The post Hatsune Miku Is Getting Her Very Own VR ‘Amusement Park’ This Summer appeared first on UploadVR.
Facebook Research: 3D Body Reconstruction From Just One Camera
For the annual computer vision conference CVPR, Facebook is showing off an algorithm which can generate a fairly detailed 3D model of a clothed person from just one camera.
Facebook is the company behind the Oculus brand of virtual reality products, and is considered a world leader in machine learning. Machine learning (ML) is at the core of the Oculus Quest and Rift S: both headsets have “inside-out” positional tracking, achieving sub-millimeter precision with no external base stations. On Quest, machine learning is even used to track the user’s hands without the need for controllers.
In a paper called PIFuHD, three Facebook staff and a University of Southern California researcher propose a machine learning system for generating a highly detailed 3D representation of a person and their clothing from a single 1K image. No depth sensor or motion capture rig is required.
This paper is not the first work on generating 3D representations of a person from an image. Algorithms of this kind emerged in 2018 thanks to recent advances in computer vision.
In fact, the system Facebook is showing off is named PIFuHD after PIFu from last year, a project by researchers from various universities in California.

On today’s hardware, systems like PIFu can only handle relatively low resolution input images. This limits the accuracy and detail of the output model.
PIFuHD takes a new approach: it downsamples the input image and feeds it to PIFu for the low-detail “coarse” base layer, then a separate new network uses the full-resolution image to add fine surface details.
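To make that coarse-to-fine split concrete, here is a toy sketch of the data flow. The two “networks” below are hypothetical stand-ins (simple NumPy functions, not the actual PIFu/PIFuHD models); the point is only the structure: a downsampled copy of the image drives the coarse prediction, and the full-resolution image then contributes detail on top of it.

```python
import numpy as np

def downsample(img, factor=4):
    """Average-pool the image by `factor` to make the low-res copy."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def coarse_predict(low_res_img):
    """Stand-in for the coarse (PIFu-style) network: a rough
    per-pixel prediction from the low-resolution input."""
    return 1.0 / (1.0 + np.exp(-(low_res_img - low_res_img.mean())))

def refine(coarse, full_res_img, factor=4):
    """Stand-in for the fine network: upsample the coarse prediction
    and blend in full-resolution detail."""
    up = np.kron(coarse, np.ones((factor, factor)))  # nearest-neighbour upsample
    h, w = up.shape
    detail = full_res_img[:h, :w] - full_res_img[:h, :w].mean()
    return np.clip(up + 0.1 * detail, 0.0, 1.0)

# Toy stand-in for the input photo: a 64x64 gradient "image".
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
low = downsample(img)          # low-res copy feeds the coarse layer
coarse = coarse_predict(low)   # rough base prediction
fine = refine(coarse, img)     # full-res detail added on top
print(low.shape, fine.shape)   # (16, 16) (64, 64)
```

The design point this illustrates is that the expensive full-resolution pass only has to learn residual detail, while the global shape is handled at a resolution today’s hardware can afford.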
Facebook claims the result is state of the art. Looking at the provided comparisons to similar systems, that seems to be true.

Facebook first showed off its interest in digitally recreating humans back in March 2019 with ‘Codec Avatars’. That project focused specifically on the head and face, and notably the avatar generation required an expensive scan of the user’s head with 132 cameras.
In May 2019, during its annual F8 conference, the company showed off real time markerless body tracking with unprecedented fidelity, using a model that takes into account the human muscular and skeletal systems.
Avatar body generation is another step on the path to the stated end goal: allowing users to exist as their real physical selves in virtual environments, and to see friends as they really look too.

Don’t get too excited just yet: this kind of technology won’t be on your head next year. When presenting Codec Avatars, Facebook warned the technology was still “years away” from consumer products.
When it can be realized, however, such a technology has tremendous potential. For most, telepresence today is still limited to grids of webcams on a 2D monitor. The ability to see photorealistic representations of others in true scale, fully tracked from real motion, could fundamentally change the need for face-to-face interaction.
The post Facebook Research: 3D Body Reconstruction From Just One Camera appeared first on UploadVR.