Remember those mixed reality combat goggles that Microsoft was building for the Army that made soldiers nauseous? (See our previous story here.) Well, they’re back.
Microsoft got its hands slapped and had to go back to the drawing board after reports came out last year that its new AR goggles were making soldiers literally sick to their stomachs. But it seems they’ve fixed the issues, and are ready for round two.
The Army just placed a new order for the Integrated Visual Augmentation System, or IVAS, goggles, Bloomberg reports.
Microsoft sent the Army some new prototype headsets this summer. The company apparently fixed the issues that caused headaches, nausea and pain.
An Army spokesperson said the new headsets showed “improvements in reliability, low light sensor performance, and form factor.”
I’m sure Microsoft was sweating bullets about this contract, since its consumer AR efforts seem to be dying on the vine. Apple has been grabbing all the hype with its upcoming mixed reality headset — but at $3,500 a pop, I don’t know how much traction those headsets are going to get, either. Which just leaves Meta and the Quest 2.
Meanwhile, Microsoft laid off a bunch of the HoloLens team earlier this year.
The next steps for IVAS include adding in cloud computing, the Army Times reports. This will let soldiers download apps for specific mission needs.
The Army wants to avoid overloading IVAS by offloading apps to the cloud instead of the device. During testing, soldiers used the goggles for assault planning, mission practice, targeting, and more.
IVAS lets them ditch the sand table to quickly scout and rehearse missions virtually.
“Rather than an MRE-box sand table, a unit could virtually ‘see’ the terrain in their heads-up display and rehearse a mission in their patrol base before leaving the wire,” Brig. Gen. Christopher Schneider told Army Times.
“Now we have to make this system producible and affordable,” he added.
Earlier issues around night vision, size, and weight are getting fixed bit by bit. The goal is to nail down cost and manufacturing in 18 months.
If all goes well, IVAS could start hitting units by 2025. Of course, that’s assuming the cloud tech actually works as advertised. And that Congress keeps funding the project.
IVAS has night vision, heat sensors, 3D mapping and other HoloLens features. It’s meant to give soldiers more awareness by layering digital info onto the real world, the company said.
To get input, Microsoft engineers did mock bootcamps in 2019, where they learned skills like navigating at night. This helped them design IVAS to handle tough conditions soldiers face.
By early 2021, after soldiers had logged around 80,000 hours of testing, Microsoft had a headset ready for combat use.
I’m not sure why they missed the whole nausea thing the first time around. Maybe the engineers had been using the headset so much themselves during development that they were used to it? Or it was so much better than the early iterations that the nausea didn’t even register as a problem anymore?
The annual OpenSimulator Community Conference, known as OSFest, will take place September 15 through 17 inside the OSFest virtual world grid. The three-day event features live music, exhibits, talks, tours, social spaces, and shopping for attendees across the hypergrid metaverse.
This year’s OSFest offers 94 hours of presentations, panels, tours, and workshops as well as 98 hours of live music performances across four stages. Participants can explore themed exhibit builds, shop merchant stores with special deals, and socialize at parties and dance events. There is no charge to attend OSFest.
To attend OSFest 2023, users simply need an account on any hypergrid-enabled OpenSim grid; from there, they can use the world map to teleport over. The hypergrid address is grid.opensimfest.com:8002:hg-welcome.
“We’re excited to welcome the extended metaverse community and hypergrid travelers to this year’s OSFest,” said lead organizer Shelenn Ayres. “It’s an immersive way to connect, have fun, and experience the breadth of creativity in virtual worlds.”
The event builds on the success of previous OSFests held annually since 2017. Each year, exhibits, stages, and infrastructure are created anew by volunteers. OSFest also relies on contributions from sponsors to cover basic hosting costs for the free event.
“The generosity of sponsors and effort of volunteers is what keeps OSFest going every year,” Ayres said.
The full schedule, performer lineup, and interactive OSFest portal with information on all exhibits, presentations, and activities can be found at OpenSimFest.com.
Organizers recommend picking up a landmark box after arriving to help navigate the sprawling regions. With a theme of “Jazz Through the Ages,” costumes matching 1860s to 1970s styles are encouraged but not required.
AviTron owner Alexander Pomposelli has a long history of closing grids without warning. Back when he ran AviWorlds, I counted more than a dozen times that he closed that grid, often with no notice at all. Residents complained of losing access to regions, inventories, and in-world currency balances.
At one point, he finally gave up, and turned AviWorlds over to former business partner Josh Boam — then started AviTron. But before he launched AviTron, he briefly ran the Virtual Ville grid, which he shut down without warning in October of 2020.
Well, AviTron may be closing at the end of the month.
“Yes, I am closing AviTron at the end of the month,” Pomposelli told Hypergrid Business. “AviTron is a business and it lost over 80 percent of its revenues… After three years online I couldn’t make it legit. Lost money every month.”
Residents have until August 28 to take their content from AviTron and move it to other grids, he said. To enable this process, AviTron will become hypergrid-enabled. The hypergrid address is avitronlogin.avitron.net:8002.
However, any currency reserves that residents had on the grid will not be exchangeable, he said.
Residents learned about the closure on AviTron’s Facebook page.
“Private region owners, the ones left are now leaving one by one and now premium account holders have also been leaving. Not paying on time, not paying at all,” Pomposelli wrote in a Facebook post yesterday.
Several people sent me copies of that announcement last night.
Then, this afternoon, Pomposelli contacted me again.
“I may just cut costs and keep it online,” he said. “I am studying it. But it cannot be the way it is now.”
The yo-yo grid
Back when I was co-hosting the Inworld Review with Mal Burns, he referred to AviWorlds as the “yo-yo grid” because it kept going up and down.
Today, AviWorlds, under new management, has been very stable and has recently passed its 5,000th registered user.
So maybe it’s not so much the grid that’s the yo-yo grid as the owner.
This is just my opinion, but, based on 14 years of covering OpenSim, it doesn’t help when your business model changes month-by-month and residents never know what to expect.
OpenSim grids typically fall into one of two categories. On the one hand, there are commercial grids. They offer reasonably-priced land and good service and are careful about managing their technology and expenses. Several have been up and running for years, with little or no downtime, by providing consistent value and service.
Other grids are run by groups, non-profits and individuals and rely heavily on volunteers. They might break even on costs, or accept donations, or just cover the server expenses themselves because OpenSim is cheaper than renting land on Second Life. Roleplaying groups and educational institutions, for example, can get a lot of inexpensive land in OpenSim and a great deal of control when they run their own grids. These grids often offer free or subsidized land to users, employees, or community members.
Commercial grids rarely offer free land and, when they do, it’s usually smaller parcels designed for residential use or for setting up stores.
Grids also typically pick whether or not they will be hypergrid-enabled and then stick with that decision. Schools and private role-playing grids, for example, might choose not to be hypergrid-enabled because they want to ensure the safety of their students or to secure proprietary content.
General-purpose grids tend to be hypergrid-enabled, allowing users to teleport in and out. This means that people don’t have to create new accounts in order to visit the grid, and it lets residents send messages to friends on other grids. It also makes it easier for residents to get content deliveries from the Kitely Market, or to go shopping on other grids. As of mid-July, more than 98 percent of all public OpenSim users were on hypergrid-enabled grids.
AviTron is an exception. The grid has changed business models several times, with Pomposelli changing his mind on whether to allow hypergrid travel, or whether users can export content — such as their own creations — from the grid. He’s experimented with NFTs, cryptocurrency, free land, and paying users to spend time on the grid, and he’s changed currency providers. The grid has also experienced some downtime, including a protracted outage in 2021.
It seems that he can’t decide whether he’s running a personal grid, paid for out of his own pocket, or a commercial grid sustained by a steady revenue stream.
He also claims to have made money from Google Ads. In his Facebook post, he said he was seeing $350 a month in advertising revenues. This is a very odd statement — you’d need to have at least 10,000 pageviews a month, and usually a lot more, to make any significant money from ads. We certainly weren’t getting anywhere near that when we had ads up on Hypergrid Business, and we typically have over 30,000 pageviews a month. OpenSim’s total active user base is only 43,000 a month.
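To put that in perspective, here’s a quick back-of-the-envelope calculation. The RPM figures — ad revenue per thousand pageviews — are my own illustrative assumptions, not Google’s actual rates:

```python
# Pageviews needed to earn $350/month from display ads, at a few
# assumed RPM rates (dollars of revenue per 1,000 pageviews).
# These RPMs are illustrative guesses, not published Google Ads figures.

target_revenue = 350.0  # dollars per month, per Pomposelli's claim

for rpm in (1.0, 3.0, 5.0, 10.0):
    pageviews = target_revenue / rpm * 1000
    print(f"At ${rpm:.2f} RPM: {pageviews:,.0f} pageviews per month")
```

Even at a generous $10 RPM, that works out to 35,000 pageviews a month — roughly as much traffic as Hypergrid Business itself gets.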
In general, grid websites don’t see much traffic. People come once or twice — to learn about the grid, and to open an account. Then, after that, all their interactions are in-world. Even the most popular grid wouldn’t see more than a few hundred visitors a month unless they had a particularly active forum or blog. AviTron did not.
Warning to future AviTron users
If AviTron does stay alive — or closes and then comes back from the dead — I strongly urge users not to invest more time and money than they can afford to lose.
If AviTron is hypergrid-enabled, I recommend that people base their primary avatar on another, more stable grid — such as OSgrid, Kitely, or DigiWorldz.
And if you need free land, get a free OSgrid region that you run on a home computer, or download the DreamGrid installer. If you own your own region, you can save it as an OAR file at any time. This is what I recommend for content creators, whether in OpenSim or Second Life — do your building and creation on a private, personal region or grid, then upload the creations to the commercial grids where you plan to use or sell them.
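If you do run your own region, backing it up takes just a couple of console commands. Here’s a minimal sketch, assuming a standard OpenSim install with access to the region console — the region and file names are placeholders:

```
change region MyRegion          # select the region to back up
save oar myregion-backup.oar    # write the region and its contents to an OAR file

# Later, or on another grid you control:
load oar myregion-backup.oar    # restore the region from the backup
```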
AvatarLife is launching its Wild Poker game today, based on Texas Hold’em poker.
But it’s more skill-based, said AvatarLife CEO Sushant Chandrasekar.
There is also a starting jackpot of 1 million AV$, the grid’s local currency — which translates to about US $4,000.
“The launch event will be a three days festival with freeplay poker and contests, other skill games contests and a special disco,” Chandrasekar told Hypergrid Business.
The event begins at 10 a.m. Pacific time on July 21 on the grid’s AvatarLife Games region.
New research reports and surveys released this month show that interest in virtual and augmented reality is continuing to drop.
According to an EY Consulting survey released earlier this month, only 24 percent of people said their company has started using VR and AR technologies, putting it in last place among all technologies people were asked about. Those other technologies included cloud and edge computing, IoT, digital twins, quantum computing, biometrics, blockchain, and generative AI. And that’s with quantum computers not even available on the market yet and generative AI only really becoming accessible to the world late last year.
Similarly, even among people who were familiar with VR technology, only 15 percent said that they wear a VR headset at work, only 17 percent said they attend meetings in the metaverse, and only 18 percent use VR for onboarding or training. The survey didn’t ask if they did those things on a regular basis, or tried them once and stopped.
According to a report by research firm IDC, global shipments of AR and VR headsets dropped sharply this year — a decline of 54 percent compared to the same time in 2022.
The only growth was in augmented reality displays — the kind with transparent lenses, where you can see the real world, just with a holographic overlay on top of it. The top example of this is the Air AR glasses from Xreal. They can project a TV screen or computer monitor into the air in front of you, and look just like sunglasses. You connect them via a USB-C port to your smartphone or tablet, or to a PC or gaming console. You can even order them with prescription lenses. And they cost just $379 — still a little on the high side, but about as much as you’d pay for a second monitor, and only about a tenth of the price of Apple’s yet-to-be-released Apple Vision Pro headset.
Xreal Air AR glasses. (Image courtesy Xreal.)
Right now they only come in black and look a little clunky. But, in general, this is what I expected the Apple headset to be — a replacement display for the iPhone screen that was as easy to use as a pair of sunglasses. Add a few design options, a Bluetooth connection, and a case that doubles as a charger and I’m sold — especially if the resolution of the display is good enough to read text.
Oh, and I want the kind of lenses that automatically turn lighter or darker when you want them to. If they could replace my regular glasses, I could just wear them all the time.
When you go inside the house, it’s basically a typical real estate or museum tour — click on the arrows to teleport around, then look in various directions. For the most part, it’s walls with pictures of items from the J. Crew catalog. A bit less pleasant to experience than the paper version, and a lot less convenient than its regular online shopping experience.
Frankly, it reminded me a bit of in-world stores in Second Life and OpenSim.
And I’m not the only one who wasn’t impressed.
According to a YouGov survey, 45 percent of J. Crew customers don’t see any practical applications for augmented or virtual reality.
The survey also shows that 67 percent of the retailer’s customers think that augmented or virtual reality allows people to experience products and services before they buy them. Since that statement is technically true — augmented and virtual reality does allow that, for a certain definition of the word “experience” — I’m surprised that the answer wasn’t 100 percent.
The thing is, many retailers are already adding virtual models to their websites, so you can see what the clothes might look like on you, or on a model that’s shaped like you. No AR or VR required.
In other words, AR and VR have all the inconveniences of physical stores — limited selection, hard to find what you’re looking for — with none of the benefits like, say, being able to feel the fabric, checking that the shoes or clothes don’t pinch or itch, or buying a Cinnabon in the food court after you’ve finished shopping as a reward for surviving the ordeal.
In April, I wrote that I had high expectations for Apple’s new augmented reality headset — and that I was looking forward to switching back to the iPhone if it was what I hoped for.
I was very much disappointed by the actual announcement on Monday.
You can watch the headset part of the presentation below:
The price tag. OMG, the price tag
Really? $3,500? Really? I’m a writer, so I’ve owned cars that cost less than that.
If we assume that prices will drop in half every year, it will take four years to get down to what I would consider a reasonable price — around $200. Unless, of course, it can work as a phone replacement, in which case, it might be down to a reasonable $875 in two years.
Not available until next year
Did I say two years? I meant three years — because this thing won’t be available until 2024.
Which means if it’s an add-on, and not a phone replacement, it will take five years to get down to a reasonable $200.
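For those following along at home, here’s the halving math I’m using — a toy calculation, obviously, resting entirely on my prices-halve-every-year assumption:

```python
# Toy projection: headset price if it halves every year.
# The halving rate is my assumption, not an Apple roadmap.
price = 3500.0
for year in range(1, 6):
    price /= 2
    print(f"Year {year}: ${price:,.2f}")
# Year 2: $875 -- tolerable if it replaces a phone
# Year 4: ~$219 -- near my $200 target, which slips to Year 5
#                  if you start the clock at the 2024 ship date
```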
The size. Look at the size.
This thing is huge. I want my augmented reality headset to be a pair of sunglasses. Especially since this thing is connected by a cord to a battery pack you wear. If it’s connected by a cord to something anyway, might as well connect it to a phone — or a pack that has the processor in it, so that the headset itself can be a lot smaller.
Also, this feels like it’s meant to be a work productivity tool. That means that the cord could be connecting it to a computer. Again, you can move the chips out of the headset itself and make it lighter.
The creepy eye display.
When you look at someone wearing one of these headsets, you’ll be able to see their eyes and where they are looking. Not because the display is transparent — but because the entire outside surface is a display screen that shows you a video of the person’s eyes.
It is super creepy. To me, at least.
(Image courtesy Apple.)
Also, it seems like such a waste of processing and display just to show a pair of eyes. If you want to have a headset where you can see the user’s eyes, just have a clear-glass headset where you can see the user’s eyes.
The pass-through camera.
And clear glass goes the other way, too.
As far as I’m concerned, in order to use an augmented reality headset for anything, you need to have clear lenses. That way, you can see your surroundings, with the augmented reality stuff as an overlay on top.
This is what I thought the Apple headset was going to look like:
(Image by Maria Korolov via Midjourney.)
Plus, when a clear-glass headset is turned off, you can still use it as a regular pair of glasses or sunglasses. I currently wear glasses. Being able to have them do double duty as a phone display screen would be excellent.
Instead, Apple decided to make the headset opaque, and to use cameras to try to trick you into thinking that you’re seeing through it to the room around you.
Now, according to Mike Rockwell, head of Apple’s AR/VR team, their headset chips are so good that it “virtually eliminates lag.”
It’s that “virtually” that gets to me. Virtually? So there will be lag between what happens around me, and what I’m seeing in the headset? That’s the kind of fake augmented reality I hate in the headsets I already own, like Meta’s Quest.
I really wanted to have transparent lenses — like the old Google Glass headset, but better looking, and with better functionality and usability.
Now, maybe Apple will be able to fully eliminate lag by the time people start to actually use the headset three to five years from now, but, today, I’m disappointed.
Even better, maybe by the time I actually buy this thing, they will have figured out a way to use clear glass. The technology is already there — glass that can programmatically go from opaque to transparent and show projected images on it.
Or they’ll have made the cameras so responsive that any lag is completely eliminated, not just “virtually.”
Not a phone replacement
Because this is a bulky headset with short battery life and a cord — and not a pair of sunglasses you can whip on and off — this is not a headset that is going to replace your phone screen.
And if it doesn’t replace my phone, then I’m not going to be using it all the time. Which means I’ll be using it the way I use my VR headsets today — infrequently. And when I use things infrequently, I forget how they work between sessions.
This means that I’m reluctant to use the headset instead of, say, just a regular Zoom meeting. Which means I use it even less often, until, eventually, it just sits there gathering dust on my shelf. I’m not about to pay $3,500 for a paperweight.
I used to think that the path to the metaverse started with screen-based virtual worlds then expanded to virtual reality. At some point, I thought, we’d all be doing everything in the metaverse. The same way that the Internet made information instantly accessible to everyone everywhere, the metaverse would do the same with experiences and human interactions.
I spent over a decade trying to make it happen for myself and my team. We had an in-world office. I started a group for hypergrid entrepreneurs that met in OpenSim. I am on the OpenSim Community Conference organizing team, and our early meetings are in OpenSim. I even figured out a way to get my desktop and many of my apps into OpenSim, so that I could work in my virtual office.
Spoiler: I did not, in fact, ever do any significant amount of work in my virtual office.
Here I am at my desk in my old virtual office.
I still think it’s possible. Well, theoretically possible, at least.
Eventually. But not in the immediate future, and not with the technology we have today.
First, until the resolution of a virtual world is as good as real life, there will be an advantage to working the old-fashioned way, especially when you’re in a graphics-heavy profession. I’m not an artist, but I do create graphics to go with blog articles and social media posts. And I’m the one responsible for web design for several outlets, including Hypergrid Business, MetaStellar, Writer vs AI, and Women in VR. That’s hard to do on a screen in a virtual world.
And don’t even get me started on trying to work in virtual reality. Even typing is hard if you can’t see the keyboard. I touch type, but sometimes I have to type special characters. I never remember where any of them are. In addition, I multi-task. I have several windows open at once and am cutting-and-pasting between them, looking things up, using calculators and other tools, and, of course, checking my phone. I can’t do most of that in virtual reality, even with a pass-through camera. And if I’m just going to be sitting at my computer, typing, why am I in a virtual reality headset, anyway?
But AR — augmented reality — well, that’s something entirely different. Instead of replacing the entire world around you with a virtual one, augmented reality just adds a little bit of the virtual to the actual world around you. Instead of looking out at the world through a distorting pass-through camera, you see the world as it is.
What this means is that, instead of a Zoom call, I can see the person I’m talking to sitting in front of me as, basically, a hologram. Well, they’d be projected on my glasses, but to me it would look as if they were actually there. Instead of a screen full of little Zoom faces, I could see people sitting around a conference table. I’d have to rearrange my home office so that my layout would work for this, but I had to rearrange my office anyway, so that it would look good on Zoom.
The thing is, we’re probably going to get to AR through our phones. Instead of wearing a smart watch, we’d wear smart glasses and just keep our phones in our pockets. Until the phones got so small, of course, that they’d fit completely inside the glasses frames.
The home screen of my phone is currently — blessedly! — ad-free, so I don’t expect to see pop-up ads just showing up willy-nilly in augmented reality, either. If they did, nobody would use the platform. Instead, we’d probably see ads the same places we currently see them — when we play free games and scroll through news feeds.
I can see some very interesting things happening when we get AR glasses. We’d use virtual keyboards instead of physical ones, and probably dictate quite a bit more, too.
I do like the physical feel of a tactile keyboard, but we already have Bluetooth-enabled keyboards that sync to our mobile devices, so I can easily see continuing to use one, if I prefer.
But I can also see myself dictating more.
Speech recognition is getting more accurate all the time. In fact, I’m already dictating most of my text messages because it’s so convenient. And instead of physical screens, we’d get virtual screens that float at arm’s length in front of us; we could position them wherever we want, and have as many screens as we want, of any size. I only have a couple of apps that don’t run on a phone — GIMP and FileMaker. Everything else I do, including word processing, is browser-based, so I can already do it on a phone.
An AR phone will make my desktop PC, monitors, keyboards, and backup laptop obsolete. Well, I’d still keep my laptop, just in case, but my other hardware will go the way of all the other devices that smartphones relegated to the trash heap of history. And, also, to literal trash heaps. And Windows. I hate Windows, and will be happy to never use it again.
Since these AR smart glasses will be so convenient, everyone will be using them for everything. We’ll be living in a world that has a continuous virtual overlay on it, a magical plane that gives us superpowers.
(Image by Maria Korolov via Midjourney.)
Oh, and our AI-powered virtual assistants who are as smart as we are, or even smarter, will live inside this virtual overlay.
All the pieces are already there — including the intelligent AI. All it will take is for someone to put them together into an actually useful device.
I’m guessing that this will be Apple. When it happens, I’ll be switching back from Android the first chance I get. I originally had an iPhone, but switched to Samsung when Gear VR came out because Apple didn’t support VR. Then I switched to the Pixel because I hated Samsung so much, and because I liked Google’s Daydream VR platform.
Both Gear VR and Daydream are now gone, though Google Cardboard remains. I still see between 3,000 and 4,000 pageviews a month on my Google Cardboard headset QR Codes page. These are the codes that people use to calibrate their Google Cardboard-compatible headsets. The headsets are ridiculously bad, with limited motion tracking, but as phone screens have gotten better, the image quality has become pretty good — good enough to watch movies on a virtual screen, and, of course, for VR porn. Ya gotta admit, porn does drive technology adoption. I’ve heard.
But the phone-screen-based approach seems to be hitting a dead end, since few people want to have a huge phone screen strapped to the front of their face.
I’ve been waiting for years for Apple to do something in this space.
“If you think about the technology itself with augmented reality, just to take one side of the AR/VR piece, the idea that you could overlay the physical world with things from the digital world could greatly enhance people’s communication, people’s connection,” Apple CEO Tim Cook says. “It could empower people to achieve things they couldn’t achieve before. We might be able to collaborate on something much easier if we were sitting here brainstorming about it and all of a sudden we could pull up something digitally and both see it and begin to collaborate on it and create with it. And so it’s the idea that there is this environment that may be even better than just the real world—to overlay the virtual world on top of it might be an even better world. And so this is exciting. If it could accelerate creativity, if it could just help you do things that you do all day long and you didn’t really think about doing them in a different way.”
He didn’t confirm or deny that an Apple AR headset was coming.
But, yesterday, Bloomberg reported that Apple is getting ready to unveil its augmented reality headset this June, and is already working on dedicated apps, including sports, gaming, wellness, and collaboration.
Here’s what an AI thinks that the new Apple headset might look like:
(Image by Maria Korolov via Midjourney.)
I do love my Pixel, and I’d have to replace all my Android apps with new iPhone ones if I switched, but if we’re about to hit an iPhone moment with augmented reality, I want to be first in line.
Virtual reality hasn’t caught on with American teens, according to a new survey from Piper Sandler released on Tuesday.
While 29 percent of teens polled owned a VR device — versus 87 percent who own iPhones — only 4 percent of headset owners used them daily, and just 14 percent used them weekly.
Teenagers didn’t seem that interested in buying forthcoming VR headsets, either. Only 7 percent said they planned to purchase a headset, while 52 percent of teens polled were unsure or uninterested.
That’s not the only bad news for VR that’s come out recently.
Bloomberg has reported that Sony’s new PlayStation VR2 headset was projected to sell just 270,000 units by the end of March, based on data from IDC. Sony had originally planned to sell 2 million units in the same time period, Bloomberg reported last fall.
In fact, VR headset numbers in general are down.
According to IDC, headset shipments declined 21 percent last year to 8.8 million units.
“This survey only further exemplifies that the current state of VR is very business-focused,” said Rolf Illenberger, managing director of VRdirect, a company that provides enterprise software solutions for the metaverse and virtual reality.
“The pandemic further accelerated progress for VR and AR usability in the office, while the release of new devices will mean more for developers building practical use cases than they will for teenagers seeking entertainment,” he told Hypergrid Business.
But that might be wishful thinking.
According to IDC, both consumer and enterprise interest in virtual reality fell last year.
Earlier this year, I wrote about how Microsoft and other companies have pulled back on their VR and AR plans. And the bad news has continued to come in.
Even Meta’s Mark Zuckerberg seems to have soured on the metaverse. In his March letter announcing a “year of efficiency” and layoffs of 10,000 people, Zuckerberg said that the company was now going to focus on AI.
“Our single largest investment is in advancing AI and building it into every one of our products,” he wrote. So much for the metaverse being Meta’s biggest investment. In 2021 and 2022, Reality Labs — its metaverse division — reported a total loss of nearly $24 billion.
Given the explosion of interest in AI since ChatGPT was released late last year, and its clear and obvious business benefits, I have serious doubts that anyone is going to be investing much in enterprise VR this year.
After all, generative AI is clearly poised to solve a multitude of business challenges, starting with improved efficiencies in marketing, customer service, and software development. And virtual reality continues to be a technology in search of a problem to solve.
I’m a huge, huge fan of OpenSim. But, other than giving a presentation at the OpenSim Community Conference in December, I can’t remember the last time I went in-world for a meeting. It’s all Zoom, Zoom, Zoom, and occasionally Microsoft Teams.
Oh, and here’s another downer. I watched the Game Developers Conference presentations from Nvidia, Unreal Engine, and Unity. I don’t play video games much, other than on my phone, so I hadn’t noticed just how amazing graphics, environments and characters have become. I originally watched for the AI announcements, which were insane, but the realism of the visuals just blew me away. I’m feeling the urge to run out and buy a gaming console.
(Image courtesy Unreal Engine.)
Now, general-purpose platforms like OpenSim don’t have to have the same level of graphics to be successful. The early web, for example, had very poor graphics compared to what was available from commercial add-ons like Flash. And look at Minecraft — you can’t get any worse than that, graphics-wise.
So while the graphics were awesome, that’s not why I was most concerned. No, I was looking at the AI-powered environment generation features. And it’s not just Unreal and Unity. There are a bunch of AI-powered startups out there making it super easy to create immersive environments, interactive characters, and everything else needed to populate a virtual world.
With the basic Unreal and Unity plans available for free, is it even worth it for developers to try to add these AI features to OpenSim? It might feel like putting a jet engine on a horse-drawn buggy. I mean, you could try, but the buggy would probably explode into splinters the minute you turned it on.
Am I wrong?
Will we be able to step into OpenSim and say, “I want a forest over there,” and see a forest spring up in front of us? Will we be able to have AI-powered NPCs we can talk to in real time? And will we be able to create interactive and actually playable in-world experiences beyond just dance-and-chat and slot machines?
There’s good news, though.
AI tools are helping to accelerate everything, including software development and documentation. With the big players pulling back from enterprise VR, this gives an opportunity for open source platforms like OpenSim to use those tools, grab this window of opportunity, and catch up. Maybe even take the lead in the future hyperlinked, open source, interconnected metaverse.
Second Life, the popular virtual world platform that has been around for over two decades, is finally releasing a mobile viewer. The new mobile viewer is currently a work in progress, but the Second Life development team has shared a sneak peek at what users can expect.
Watch the preview in the video below:
The focus of the development team has been on delivering full rendering of avatars and 3D environments, and they have achieved impressive results so far. The mobile viewer is based on Unity and will be available on both Android and iOS platforms, allowing users to access Second Life from their mobile devices.
“We’ve started our development work with some of the most challenging aspects first… the full rendering of avatars with all their complex attachments and behaviors, as well as the full rendering of 3D environments that are so critical to the Second Life experience,” said Andrew Kertesz, Linden Lab’s VP of engineering, who is also known as Mojo Linden.
It is important to note that there is still no news about a web-based viewer for Second Life, which, in my opinion, is a bit more important since it makes it easier to invite new users to visit you inside the platform. Right now, the existing viewers have steep learning curves and it doesn’t help that the movement and camera controls don’t match those of popular video games.
Still, Second Life continues to be popular with its user base, despite the lack of significant innovation over the past twenty years.
Maybe a mobile viewer will get some former users to come back and revisit the platform, but it remains to be seen what impact it will have on the popularity and growth of Second Life.
“Over the past nearly two decades, I have seen Second Life enable people from all corners of the globe to create, socialize, experiment, engage in education and business, or even develop relationships,” said VP of product operations Eric Nix, also known as Patch Linden in-world. “Imagine being able to stay connected with your Second Life from anywhere — chat with friends, visit your favorite in-world hangout spots, and later do pretty much anything you can do with the desktop Second Life viewer, without being tethered to your computer.”
The Second Life team has promised to keep users updated on the progress of the mobile viewer; the beta version is expected to launch later this year.
I can’t tell if he’s just being tone deaf, or trying desperately to do some damage control, but after releasing ChatGPT without any warning on an unsuspecting world late last year, OpenAI CEO Sam Altman is now calling for slow and careful release of AI.
If you remember, ChatGPT was released on November 30 of 2022, just in time for take-home exams and final papers. Everyone started using it. Not just to make homework easier, but to save time on their jobs — or to create phishing emails and computer viruses. It reached one million users in just five days. According to UBS analysts, 100 million people were using it by January, making it the fastest-growing consumer application in history.
And according to a February survey by Fishbowl, a work-oriented social network, 43 percent of professionals now use ChatGPT or similar tools at work, up from 27 percent a month prior. And when they do, 70 percent of them don’t tell their bosses.
Last week, OpenAI released an API for ChatGPT allowing developers to integrate it into their apps. Approval is automatic, and the cost is only a tenth of what OpenAI was charging for the previous versions of its GPT AI models.
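To give you a sense of how low the barrier now is, here’s roughly what an integration looks like with the openai Python package as it shipped at the time — the API key is a placeholder, and the prompt is just an example:

```python
# Minimal sketch of a ChatGPT API call (openai package, pre-1.0 versions).
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT
    messages=[
        {"role": "user", "content": "Summarize the history of OpenSim in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

That’s the whole thing — a dozen lines, and your app is talking to ChatGPT.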
So. Slow and careful, right?
According to Altman, the company’s mission is to create artificial general intelligence.
That means building AIs that are smarter than humans.
He admits that there are risks.
“AGI would also come with serious risk of misuse, drastic accidents, and societal disruption,” he said.
He forgot about the killer robots that will wipe us all out, but okay.
(Image by Maria Korolov via Midjourney.)
He says that AGI can’t be stopped. It’s coming, and there’s nothing we can do about it. But it’s all good, because the potential benefits are so great.
Still, he says that the rollout of progressively more powerful AIs should be slow.
“A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place,” he said.
Maybe he should have considered that before putting ChatGPT out there.
“We think it’s important that efforts like ours submit to independent audits before releasing new systems,” he added.
Again, I’m sure that there are plenty of high school teachers and college professors who would have appreciated a heads-up.
However, he also said that he’s in favor of open source AI projects.
He’s not the only one — there are plenty of competitors out there furiously trying to come up with an open source version of ChatGPT that companies and individuals can run on their own computers without fear of leaking information to OpenAI. Or without having to deal with all the safeguards that OpenAI has been trying to put in place to keep people from using ChatGPT maliciously.
The thing about open source is that, by definition, it’s not within anyone’s control. People can take the code, tweak it, do whatever they want with it.
“Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history,” he said. “Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.”
There is one part of the statement that I found particularly interesting, however. He said that OpenAI has a cap on shareholder returns and is governed by a non-profit, which means that, if needed, the company can cancel its equity obligations to shareholders “and sponsor the world’s most comprehensive UBI experiment.”
UBI — or universal basic income — would be something like getting your Social Security check early. Instead of having to adapt to the new world, learn new skills, find new meaningful work, you could retire to Florida and play shuffleboard. Assuming Florida is still above sea level. Or you could use the money to pursue your hobbies or your creative passions. As a journalist whose career is most definitely in the AI cross-hairs, let’s color me curious.