The total land area on OpenSim’s public grids reached the equivalent of 132,969 standard regions this month, an all-time high — and the third month in a row that OpenSim land area has broken records.
There was an increase of 1,089 standard region equivalents compared to last month. Meanwhile, the total number of registered users went up by more than 1,338. The number of active users fell, however, by over 2,100, partly due to grid outages. ProxyNet, for example, which reported over 400 active users last month, was down this month. Two other grids, Vivo Sim and Darkheart’s Playground, each reported drops of more than 500 active users and may have had database issues.
I’m now tracking a total of 2,654 public grids, of which 308 were active, and 244 published their statistics this month. If you have a stats page that we’re not tracking, please email me at maria@hypergridbusiness.com — that way, your grid will be mentioned in this report every month, for additional visibility with both search engines and users.
Also, I’m no longer sending out a monthly email blast reminding OpenSim grid owners to send me news and updates for this report. If you have news, please email me before the tenth of the month if you want a short item included in this monthly wrap-up. For longer news, feel free to send me press releases at any time.
Our stats do not include many of the grids running on DreamGrid, a distribution of OpenSim, since these tend to be private grids.
OpenSim is a free, open-source virtual world platform similar to Second Life that allows people with no technical skills to quickly and cheaply create virtual worlds and teleport to other virtual worlds. Those with technical skills can run OpenSim worlds on their own servers for free, using DreamGrid, the official OpenSim installer for those who are more technically inclined, or any other distribution, while commercial hosting starts at less than $5 a region.
Every month on the 15th — right after the stats report comes out — we will be sending out a newsletter with all the OpenSim news from the previous month. You can subscribe here or fill out the form below.
Top 25 grids by active users
When it comes to general-purpose social grids, especially closed grids, the rule of thumb is the busier the better. People looking to make new friends look for grids that already have the most users. Merchants looking to sell content will go to the grids with the most potential customers. Event organizers looking for the biggest audience — you get the idea.
All region counts on this list are, whenever available, in terms of standard region equivalents. Active user counts include hypergrid visitors whenever possible.
Many school, company, or personal grids do not publish their numbers.
I cover artificial intelligence at my day job. Every week, I talk to the experts building and deploying the technology, and to companies already finding value in it. The AI-powered transformation is bigger than anything I’ve covered in my two-plus decades of technology journalism. It’s also moving faster than anything I’ve covered. And, unlike some other tech trends, companies are almost universally already seeing value in it.
I’m not going to argue here about whether it’s good or bad — I’m going to save that for another essay. Neither am I going to talk, today at least, about the copyright issues, the job displacements, and the potential destruction of civilization. Those are all real concerns, but let’s put a pin in them for now and come back to them later.
Today, I’m going to talk about AI and world building. If you build worlds — or want to get into the world-building business, either as a game designer, artist, or writer, or OpenSim creator — here are three ways generative AI will change everything.
Can AI build worlds?
Generative AI is bad. Often laughably bad. It can’t do hands. Its attempts at writing code fail most of the time. We all know this, we laugh at it, we roll our eyes at people saying that AI is going to change anything except dupe dumb people into falling for even more stupid political spam.
Except — and this is super important — except that AI is learning continuously and evolving fast.
Let me remind you how far image generators came in just one year:
In 2023, the same thing happened with text. We went from silly little poems written by ChatGPT to AI writing part of an award-winning sci-fi novel. We point to something bad that AI has generated and pat ourselves on the back for being able to spot it so easily. Yes, we can spot bad AI. But we can’t spot good AI.
This year, we’re seeing the same progression happening with video. Remember Will Smith eating spaghetti?
Here’s today’s state-of-the-art, from OpenAI’s Sora model:
So what’s going to happen next?
First, AI is getting consistent. It’s getting a long-term memory. Early versions of AI couldn’t remember what they did before, so text and images and videos were inconsistent. Characters and backgrounds morphed. Stories went in crazy and contradictory directions. Today’s cutting-edge AIs have context windows of up to 10 million tokens. Yup, Google’s Gemini 1.5 model has been tested to accurately handle up to 10 hours of video or enough text for all of the Harry Potter books, seven times over.
Second, generative AI is going multi-modal. That means it’s combining video, audio, text and code into a single model. So, for example, it can write the text for a story, create a scene list for it, create a story book for it, create a video for it, and create audio for it, with the result being an entire coherent movie. Yeah, that’s going to happen. The tech companies already have preliminary models that can do most of this, including that Google AI I just mentioned.
Third — and this is the key part — the next generation of generative AIs will be able to simulate the world. OpenAI said as much in a research paper released shortly after its Sora announcement: “Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world.”
Now, today’s models don’t fully understand physics. They don’t know how glass breaks, the direction of time, or that, say, mass is conserved. We can point at this and laugh and think that these models will never understand these things — just like they don’t understand the concept of human hands.
Well, some of the AIs have become really good at making human hands.
You might think that physics would be a bigger challenge. But Google, the company making Gemini, has all of YouTube to train it on. Plus, all our physics textbooks. And all the rest of human knowledge.
According to the OpenAI paper, developing accurate world simulators is mostly a question of making the models big enough.
From the researchers:
We believe the capabilities Sora has today demonstrate that continued scaling of video models is a promising path towards the development of capable simulators of the physical and digital world, and the objects, animals and people that live within them… We find that video models exhibit a number of interesting emergent capabilities when trained at scale. These capabilities enable Sora to simulate some aspects of people, animals and environments from the physical world. These properties emerge without any explicit inductive biases for 3D, objects, etc.—they are purely phenomena of scale.
The authors call these “emerging simulation capabilities,” meaning that they appear on their own, without any specific training or interventions. They list several, including 3D consistency, long-range coherence and object permanence, and accurate physical interactions.
And it gets better. The authors say the model is already able to create digital worlds.
Sora is also able to simulate artificial processes–one example is video games. Sora can simultaneously control the player in Minecraft with a basic policy while also rendering the world and its dynamics in high fidelity. These capabilities can be elicited zero-shot by prompting Sora with captions mentioning “Minecraft.”
What does this mean for creators?
Generative AI, like other technologies before it, is a force multiplier. If you can do something, you will be able to do more of it, faster, and, possibly, better.
If you can’t do something, it will give you the ability to do it.
For example, most of us can’t chop down a tree with our bare hands. Give us a knife, and it might take us a while, but we’ll eventually get there. With an axe, we’ll get there faster. With a chainsaw, we can chop down lots of trees. With a swing boom feller buncher, we can cut down an entire forest.
I’m not saying that cutting down entire forests is a good thing. Or that you’d want a forest-clearing bulldozer accidentally rolling through your backyard. I’m saying that the technology gives you power to do things that you couldn’t do before.
Yes, we need laws and regulations about cutting down forests, and not letting bulldozers accidentally drive into people’s houses. And yes, these machines did reduce the number of people needed to cut down each tree. I’m not disputing that. All I’m saying is that these machines exist. And if you work in the timber industry, there’s a good chance the company you work with will be using them. And if you’re an individual, you’ll probably still be using your bare hands to pull up tiny saplings in your back yard, or gardening shears to trim bushes, or a chainsaw to cut down full-grown trees.
Similarly, generative AI will dramatically expand the tools available to people who create worlds for a living. You will still be able to do things the old way, if you want, but the companies you work for — and their customers — will increasingly demand the new tools. And if customers right now are saying things like “no, never!”, tomorrow they’ll be flocking to AI-generated landscapes, AI-powered interactive characters, and storylines more intricate than anything possible today.
Future Tools tracks 38 different AI-powered tools for creating video games. TopAI has 70.
Google has released a preview of its own thing, an AI called Genie that automatically generates playable platform games.
Here are just some of the generative AI tools that are on their way, or are already here:
Terrain Generation: AI algorithms can procedurally generate realistic and diverse landscapes, including mountains, rivers, forests, and cities. This can save world builders countless hours of manual terrain sculpting and enable the creation of vast, detailed environments.
3D Asset Creation: Generative AI models can create 3D models, textures, and animations for objects, characters, and creatures. This could greatly expedite the process of populating worlds with diverse and unique assets, from furniture and vehicles to flora and fauna.
NPC Generation: AI can help create non-player characters (NPCs) with unique appearances, personalities, and behaviors. This includes generating realistic dialogue, responsive interactions, and adaptive quest lines. AI-driven NPCs could make worlds feel more alive and immersive. For OpenSim grids, NPCs could provide tours, answer questions, and help populate interactive stories.
Dynamic World Events: AI systems could be used to generate and manage dynamic events within the world, such as weather patterns, natural disasters, economic fluctuations, and political upheavals. This would create a more unpredictable and evolving world that responds to player actions. This could be especially useful for educational grids running simulations.
Procedural Architecture: AI could generate buildings, cities, and entire civilizations procedurally, complete with unique architectural styles, layouts, and decorations. This would enable the rapid creation of diverse and detailed urban environments. I think this could also be useful for building automatic themes for new grid owners. Today, many hosting companies offer starting regions. With generative AI, these regions can be redesigned quickly in different styles. At first, I don’t think this should be done in real-time — the environments will still need human tweaking to be livable. But, over time, the AI-generated stuff will be better and will increasingly be used as-is.
Localization and Accessibility: AI-powered tools could help automate the localization process, translating text, speech, and cultural references to make worlds accessible to a wider audience. AI could also be used to generate subtitles, audio descriptions, and other accessibility features. OpenSim grids with multilingual audiences have already been using automated translators, for example. With generative AI, these tools just keep getting better and faster.
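Several of these capabilities build on classical procedural-generation techniques that predate generative AI. As a concrete, non-AI illustration of the terrain-generation idea, here is a minimal one-dimensional midpoint-displacement sketch in Python — the function name and parameters are my own, purely for illustration, not from any OpenSim or AI tool:

```python
import random

def midpoint_terrain(size_exp=4, roughness=0.5, seed=42):
    """Generate a 1-D heightmap via midpoint displacement.

    size_exp:  the heightmap has 2**size_exp + 1 points.
    roughness: how quickly random displacement shrinks each pass;
               lower values give smoother terrain.
    """
    rng = random.Random(seed)
    n = 2 ** size_exp
    heights = [0.0] * (n + 1)
    # Random endpoints to start from.
    heights[0], heights[n] = rng.uniform(0, 1), rng.uniform(0, 1)
    step, amp = n, 1.0
    while step > 1:
        half = step // 2
        # Displace each segment's midpoint by a shrinking random amount.
        for i in range(half, n, step):
            mid = (heights[i - half] + heights[i + half]) / 2
            heights[i] = mid + rng.uniform(-amp, amp)
        step, amp = half, amp * roughness
    return heights
```

The same idea extended to two dimensions (the diamond-square algorithm) is what many terrain tools use as a starting point; AI-driven generators differ in that they learn what plausible mountains and rivers look like rather than relying on random displacement alone.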
I personally don’t believe that these tools will hurt the video game and virtual world industries. Instead, they will put more power in the hands of designers — making games and worlds more interesting, more immersive, more detailed, more surprising. And bigger. Much, much, much bigger. And it will open up the industry to indie designers, who’ll be able to produce increasingly interesting games.
In the long term, at least.
In the short term, there will be disruption. Probably a lot of it. And during past tech disruptions, the jobs lost weren’t the same as the jobs gained — creative jobs, in particular, take time to start paying off.
For example, when newspapers and magazines started laying journalists off after the Internet came along, most journalists found new jobs. Some moved to traditional outlets that were still hiring. Some went into marketing and public relations. A few found new media jobs. And some launched their own publications — they used this Internet thing and launched blogs and podcasts and YouTube channels. A few of them made money at it. But it took years for the new media to gain any respect and credibility and for people working in it to make any money.
In fact, many of the people who made it big in new media were not traditional journalists at all, but new to the field.
Sometimes, people who do things the old way don’t want to change. They don’t think it’s fair that their hard-won skills are no longer as useful. They think that the new ways are lazy or low quality. They might even think that it’s unethical or immoral to do things the new way. That people who, say, cancel their newspaper subscription and get their news online are morally bankrupt, and that journalists who enable this are helping to destroy the industry. There are still journalists who feel this way.
We’re probably going to see something similar happening in the age of AI. New tools will pop up putting more power in the hands of more people — power to create art, music, software, video games, even entire books. And you won’t need to spend years learning these skills. Sure, the stuff they create will be bad at first, but will quickly get better as the technology improves, and the skills of people using the tools improve as well. Some of these people will make money at it. Most won’t. But, eventually, best practices will emerge. The sector will gain credibility — money helps. And, eventually, with the exception of a few curmudgeons, we’ll adapt and move on. It will become a non-issue — like, say, using a word processor, or using the Internet, or doing a Zoom call instead of a face-to-face meeting.
Don’t forget that this mix of excitement and apprehension is nothing new. Whenever groundbreaking technologies emerge, they’re met with both enthusiasm and anxiety.
I’m sure there used to be people sitting around a fire saying, “Kids these days. All they want to do is look at cave paintings instead of going out and hunting. Mark my words, these cave paintings will destroy civilization.” Or, “Kids these days. Writing stuff down. In my day, we used to have to memorize odes and sagas. You had to actually use your brain. Mark my words, this writing thing will destroy civilization.” Or, “Kids these days with their fires. Back in my day, we ate our meat raw and were happy about it. Mark my words…”
Yes, there’s a small but non-zero chance that AI will destroy civilization, as was the case with nuclear power, electricity, and even fire.
But I think we’ll get past it, and look back at the curmudgeons fondly, from the safe perspective of a future where we were mostly able to deal with AI’s downsides, and mostly benefit from its upsides.
Things to watch out for
Speaking of downsides, in addition to job losses, there are other potential risks of using generative AI for games and virtual worlds.
They include:
Homogenization of Worlds: If many world builders rely on the same AI tools and datasets, there’s a risk that worlds could start to feel generic or samey. The distinct style and creative fingerprint of individual artists and designers might be lost, leading to a homogenization of virtual environments. On the other hand, we’re already seeing this in OpenSim with the same free starter regions popping up on all the grids, and the same Creative Commons-licensed content showing up in all the grid freebie stores.
Unintended Biases: AI models can inherit biases from their training data, which could lead to the perpetuation of stereotypes or the underrepresentation of certain groups in generated content. This could result in virtual worlds that inadvertently reinforce real-world inequalities and lack diverse representation. On the other hand, AI could also help create greater variation in, say, starter avatars and skins. It all depends on how you use it — but it is definitely something to watch out for.
Privacy Issues: In a virtual world, a user’s every interaction with the environment can be recorded and analyzed. Then AI can be used to tailor experiences specifically for each user, creating a more immersive, captivating world. But also — creepy invasion of privacy alert! OpenSim grid owners should be very transparent about what information they collect and how they use it.
OpenSim grids and AI: a plan for action
First, start experimenting with generative AI for the low-hanging fruit: non-vital marketing images, marketing text, social media content, that kind of thing.
Don’t use AI to generate images of what your world looks like in order to deceive people. That will backfire in a big way. Use it to generate logos, icons, generic background illustrations — things that don’t matter to your customers but make your content a little nicer to consume.
Don’t use AI to generate filler text. Use it to turn information into readable content. For example, if you have an announcement, you can take your list of bullet points and turn it into a readable press release. If you’re a non-native-English speaker, turn your ungrammatical scribbles into an engaging, properly written blog post. If you have a video tutorial, turn the transcript into a how-to article for your website — or turn your how-to article into a video script.
Then use AI to turn those useful, informative blog posts, press releases and videos into social media content.
One piece of advice: when creating this content, don’t be generic and impersonal. Add in your personal experience. Show your real face, give your real name, explain how your personal background has led you to this topic. Even as you use AI to improve the quantity and quality of your content, also lean into your human side to ensure that this content actually connects with your audience.
You can also ask ChatGPT, Claude, or your large language model of choice for business and marketing advice. Remember to give it as much information as possible. Tell it what role you want it to play — experienced financial advisor? small business coach? marketing expert? — provide it with background on yourself and your company, and tell it to ask you questions to get any additional information it needs before giving you advice. Otherwise, it will just make assumptions based on what’s most likely. As the old saying goes, if you assume, you make… and garbage in, garbage out.
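The advice above — assign a role, supply background, and tell the model to interview you before answering — can be captured in a small prompt-builder. Here is a minimal sketch in Python; the system/user message structure follows the convention common to chat APIs, but the function name and fields are illustrative, not tied to any particular vendor’s SDK:

```python
def build_advice_prompt(role, background, question):
    """Assemble a chat-style prompt that follows the article's advice:
    name a role for the model, supply background, and ask it to pose
    clarifying questions first instead of guessing at missing details."""
    system = (
        f"You are an experienced {role}. "
        "Before giving any advice, ask me clarifying questions about "
        "anything you need to know that I haven't told you."
    )
    user = (
        f"Background on me and my grid: {background}\n\n"
        f"My question: {question}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: pass the result to whichever chat-completion endpoint you use.
messages = build_advice_prompt(
    "small business coach",
    "I run a small OpenSim grid with about 200 active users.",
    "How can I attract and retain more residents?",
)
```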
Many OpenSim grids have plenty of room for improvement when it comes to business management, marketing, and community building. The AI can help.
Next, start looking for ways that generative AI can improve your core product. Can it help you write scripts and code? Create 3D objects? Create terrains? Generate interactive games? Suggest community-building activities and events? Create in-world interactive avatars?
These capabilities are changing very quickly. I personally stay on top of these things by following a few YouTube channels. My favorites are Matt Wolfe, The AI Advantage, and Matthew Berman.
If you know of any other good sources for up-to-date generative AI news useful for virtual world owners, please let us know in the comments! And are there any specific AI-powered tools that OpenSim grids are using? Inquiring minds want to know!
This past weekend, I went to the local Apple store and got a demo of the new Apple Vision Pro headset — the one where you’ll spend a minimum of $3,500 and more likely $4,000 if you buy it.
A few years ago, I traded my iPhone for an Android because Samsung released the Gear VR headset and Apple didn’t have anything similar in the pipeline.
I still miss my iPhone, but all the phone-based VR action has been on the Android side. However, Samsung dropped its Gear VR project, and Google stopped developing its Cardboard and Daydream View platforms.
So I’m open to going back to the Apple ecosystem, if there’s something worth switching for.
Is the Apple Vision Pro the reason to switch? No.
Was the demo educational? Yes, and I’m going to tell you what I learned.
And, at the end of this article, I’ll explain who should buy the headset now, and who should wait for two or three generations.
But first, why I’m not going to switch to the iPhone and buy an Apple Vision Pro, even though I cover tech and could deduct it as a business expense.
I can’t do my work on it
Even putting aside the fact that my work computers are all Windows, and the Vision Pro only pairs with Apple computers — and late-model computers at that — the headset itself isn’t optimal for prolonged use.
It’s heavy, so you don’t want to wear it for hours. There’s no usable virtual keyboard — you’d need to use a physical keyboard anyway. It’s hard to drink coffee in it. And you can’t attend Zoom meetings in it. Yes, you can FaceTime — but only as a cartoon avatar.
And, of course, it doesn’t replace a computer. It’s an add-on to a computer. It’s basically a single external monitor for my computer. I already have two giant monitors, and monitors are ridiculously cheap now, anyway. If I wanted to upgrade a monitor, I’d just upgrade the monitor itself and not switch to a VR headset.
Still, the graphics are amazing. I enjoyed the almost-completely-realistic resolution of the display.
There’s no killer app
I didn’t see anything during the demo that I absolutely had to have, and would use all the time.
If there was a killer app, then maybe I’d get the headset, and then use the other stuff because I have the headset on all the time, anyway — might as well do everything in VR.
That’s what happened with smartphones. We got the smartphones because we needed a phone anyway. You can’t live without a phone. And once you have the phone, might as well use it as a camera, as a GPS, as an ebook reader, as a music player, as a note-taking app, as a calendar, and as a casual gaming device. Not to mention all the thousands of other apps you can use on a smartphone.
Will that happen with VR? No. Nobody is going to spend their life inside a VR headset.
It will happen with AR. I’m still totally convinced that AR glasses are the future. They will replace our phones, and, since we have the AR glasses on anyway, we’ll use them for work, we’ll use them for music and movies and games and social media and everything else.
But right now, we’re not there. The Apple Vision Pro wants to be there — its pass-through camera makes the device usable for augmented reality. But it’s not an always-on, always-with-you device. And until it is, we’ll still use all the other stuff.
It’s too expensive for just fun and games
Sure, there are a handful of games for the Vision Pro. And you can watch movies on a giant personal screen.
But there are far, far cheaper ways to play VR games, with much bigger selections. And there are already very cheap and lightweight glasses that let you watch movies if you want that kind of thing. Or you can just buy a slightly bigger TV. TVs are getting ridiculously cheap these days.
Also — I already own a big TV set. And I can watch my TV with other people. I can’t watch movies on the Vision Pro with other people.
Now, let’s talk about what I learned about the future from getting this demo.
Seeing an overlay over reality is awesome
Yes, the Meta Quest has a pass-through camera, but the video quality is lousy.
The Apple Vision Pro’s video quality is awesome. It’s almost like looking through a pane of glass. Not exactly glass — it fakes it with video — but close enough. I was extremely impressed.
So the idea of a transparent pair of glasses that can turn into AR or VR glasses on demand — it’s within reach. And these transparent glasses are going to be awesome for augmented reality. And we can replace our phones with these glasses.
If you want to see what that world will be like, go to the nearest Apple store and get your own Vision Pro demo.
The interface of the future will be gesture-based
Remember how, in Minority Report, Tom Cruise moved images around with his hands?
That’s what the Vision Pro interface is like. Cameras on the headset track your hands. In fact, even when my hand was hanging down by my side, the headset still registered my pinching gesture.
No controllers necessary.
This will be great in the future when we wear smart glasses all the time because we won’t have to carry controllers around. The fewer things we have to carry around, the better.
But the big progress Apple made with the interface is the eye-tracking. To click on something, you just look at it and pinch your fingers. That means you don’t have to hold your hands up in the air in front of you all the time. I mean, how long can Tom Cruise stand there, waving his arms around? No matter how fit you are, that’s going to get tiring.
And, like I said, you don’t need to raise your arm to make the pinching gesture. You can keep your hand down on your lap, or by your side, or on your desk.
You still need to raise your arms to resize windows, or to drag them around, but how often do you need to resize a window, anyway?
Who should buy the headset today
If you’re building an AR or VR platform for the future, you should definitely check out the Vision Pro and see what possibilities are offered by the pass-through camera and the eye-tracking-and-pinching control system.
But, unless your company is paying for the device, return it within the two-week period.
The only reason to keep it is if you are currently developing apps for the Apple Vision Pro. Then, you need the device to test your apps.
If you’re anyone else, buy a larger TV, a computer monitor, and a PlayStation VR or a Quest to play games on, and you’ll still be around $3,000 ahead.
But do go and get a demo. It’s free, and it might give you some ideas for apps or business opportunities for a few years down the line.
The total land area on OpenSim’s public grids reached the equivalent of 131,880 standard regions this month, an all-time high, with an increase of 369 standard regions compared to last month. Meanwhile, the total number of registered users went up by more than 2,100 and the number of active users rose by over 2,200.
I’m now tracking a total of 2,653 public grids, of which 316 were active, and 251 published their statistics this month. If you have a stats page that we’re not tracking, please email me at maria@hypergridbusiness.com — that way, your grid will be mentioned in this report every month, for additional visibility with both search engines and users.
Also, I’m no longer sending out a monthly email blast reminding OpenSim grid owners to send me news and updates for this report. If you have news, please email me before the tenth of the month if you want a short item included in this monthly wrap-up. For longer news, feel free to send me press releases at any time.
Our stats do not include many of the grids running on DreamGrid, a distribution of OpenSim, since these tend to be private grids.
OpenSim is a free, open-source virtual world platform similar to Second Life that allows people with no technical skills to quickly and cheaply create virtual worlds and teleport to other virtual worlds. Those with technical skills can run OpenSim worlds on their own servers for free, using DreamGrid, the official OpenSim installer for those who are more technically inclined, or any other distribution, while commercial hosting starts at less than $5 a region.
Every month on the 15th — right after the stats report comes out — we will be sending out a newsletter with all the OpenSim news from the previous month. You can subscribe here or fill out the form below.
Top 25 grids by active users
When it comes to general-purpose social grids, especially closed grids, the rule of thumb is the busier the better. People looking to make new friends look for grids that already have the most users. Merchants looking to sell content will go to the grids with the most potential customers. Event organizers looking for the biggest audience — you get the idea.
All region counts on this list are, whenever available, in terms of standard region equivalents. Active user counts include hypergrid visitors whenever possible.
Many school, company, or personal grids do not publish their numbers.
The organizers of OSFest 2024, an annual festival that will be held this coming October, recently opened voting for this year’s theme to their Discord community. Festival participants can vote by reacting with a specific emoji to indicate their preferred theme from a selection of options, either via the OSFest Discord server or by sending an email to opensimfest@gmail.com.
Voting closes on January 24.
“For OSFest 2023, we took a poll to decide which theme we had that year,” said Lisa Laxton, OSFest director and founder of the Infinite Metaverse Alliance and Laxton Consulting. “Since the community liked this approach giving them a voice for this hypergrid community event, we are doing it again for OSFest 2024 but early in the planning based on feedback from last year’s participants.”
An email announcement went out this month with details on the festival dates from October 4 to October 20 and welcoming merchants, exhibitors and performers to take part. The festival takes place on the OSFest Grid, with some free land parcels provided to qualifying merchants and exhibitors.
Suggested themes for exhibitors this year include “Futurism,” “Silk Road and Asian history,” “Burning Man and Woodstock expressions,” “Hypergrid community unity,” “creative art and architecture” or no set theme. Performers and merchants are not required to follow the selected theme.
“Participants asked for a longer lead time and theme so they can plan and create their builds months before the grid is open for them to transfer builds from their home grids,” Laxton said.
Organizers plan to allow voting until January 24 before deciding on this year’s official theme.
“For two full weeks, including weekends, we’ll have music, dance, art, and merchant expos in one place,” said Laxton. “This is an all volunteer effort with a limited number of free parcels for exhibitors and merchants provided by the grid sponsors.”
OSFest organizers are also looking for greeters, performers, artists, builders, scripters, merchants, and promoters.
“The more volunteers we have, the more time we each get to spend visiting exhibits and stores as well as attending events within the OSFest Grid,” Laxton said.
The total land area on OpenSim’s public grids reached the equivalent of 131,511 standard regions this month, an all-time high, up more than 1,000 standard regions from last month. However, the total number of active users fell by more than 3,500.
Part of the decrease was because the OpenSim Community Conference was held last month, which drove those numbers up. A few grids also failed to report stats this month, had outages, or closed, but nothing major. For the most part, the decrease came from lower numbers across the board, possibly due to the winter holidays.
Scaling back for 2024
You might notice that this article is substantially shorter than previous months’. I’m scaling back my OpenSim coverage this year to make more time for other projects. If you have a press release, you can send it to me, but the closer it is to publishable format, the quicker I’ll be able to post it on the site.
I’m now tracking a total of 2,653 public grids, of which 334 are active and 258 published their statistics this month. If you have a stats page that we’re not tracking, please email me at maria@hypergridbusiness.com — that way, your grid will be mentioned in this report every month, for additional visibility with both search engines and users.
Our stats do not include many of the grids running on DreamGrid, a distribution of OpenSim, since these tend to be private grids.
OpenSim is a free, open-source virtual world platform that’s similar to Second Life and allows people with no technical skills to quickly and cheaply create virtual worlds and teleport to other virtual worlds. Those with technical skills can run OpenSim worlds on their own servers for free using DreamGrid, the official OpenSim installer (for those who are more technically inclined), or any other distribution, while commercial hosting starts at less than $5 a region.
Every month on the 15th — right after the stats report comes out — we will be sending out a newsletter with all the OpenSim news from the previous month. You can subscribe here or fill out the form below.
Top 25 grids by active users
When it comes to general-purpose social grids, especially closed grids, the rule of thumb is the busier the better. People looking to make new friends look for grids that already have the most users. Merchants looking to sell content will go to the grids with the most potential customers. Event organizers looking for the biggest audience — you get the idea.
All region counts on this list are, whenever available, in terms of standard region equivalents. Active user counts include hypergrid visitors whenever possible.
Many school, company, or personal grids do not publish their numbers.
Hey there, Hypergrid Business readers. It’s the new year, and I’m moving my office and cleaning up, and have a few VR headsets sitting around that I’d like to get rid of.
They work, are hardly used, and one is even in its original — UNOPENED — box.
If you’re in the western Massachusetts area, and want to meet up, I can give you a free VR headset. Or if you’re anywhere in the world, and can pay for shipping, you can buy the brand-new one.
Here’s the one I’m selling
HP Reverb G2 VR headset
I don’t have my own picture of the headset itself because I haven’t opened the box. Yup, I bought it a year ago and never even opened it. It’s been sitting on a shelf in my office, and I realized that if I haven’t opened it yet, I’m never going to.
It lists for $599 on the HP website and is currently on sale for $469, but it’s out of stock as I write this. I’m selling it for $400.
The box is unopened, so I can’t say for certain what’s inside, but I bought it directly from HP and I’m reasonably sure they put in everything it’s supposed to have.
Here’s the official picture of the headset itself:
It’s a fancy, high-end headset that comes with two controllers, has six degrees of freedom, and is compatible with SteamVR and Windows Mixed Reality. It’s a tethered headset: you plug it into your computer, so there’s a cable attached to your head while you use it. So, unless you’ve got one of those computers that fits in a backpack, you’d probably be using this headset sitting down, or at least standing in one place close to your PC.
Here’s a picture of some guy using it, with the chair positioned just right so you can’t see the cable running from his head to the laptop:
Are you interested? Email me at maria@hypergridbusiness.com. I’m charging $400 plus shipping, so if you’re not too far away, it might be a good deal.
If nobody here is interested, I’ll put it up on eBay.
And here are the three free VR headsets I’m giving away:
HTC Vive
Comes with a couple of controllers plus a faceguard thing. It’s an all-in-one headset that you recharge with a USB cord. I think it’s the HTC Vive Focus Plus. It’s currently $449 on the official website, down from a regular price of $629. I’ve opened it and played with it, and no longer have the original packaging, so I’m just giving it away.
You don’t need a phone or a PC to use it, so it’s completely wireless. You do need a WiFi connection, though, to download apps and stuff.
If you’re around Western Massachusetts, we can meet up in some local coffee shop, and you can just have it. Or you can pay for shipping and I can box it up and send it to you. But, like I said, I don’t have the original packaging so I’ll have to bubble wrap it.
The controller has a little hidey-place inside the headset:
That’s also how you put your phone in it. For a list of compatible phones, see this official list from Google.
It can also run regular Google Cardboard apps, but then the controller won’t work.
Generic Google “Cardboard” headset
This is one of those cheap generic $10 headsets you can buy at Walmart that you put your phone into. It can run any Cardboard-compatible app.
I use it with my Android phone, but there’s even support for iPhones. There’s no controller with Cardboard, and no six degrees of freedom. You can turn your head, but the view doesn’t track movement from side to side or forward and back, so if you’re not careful with how you use it, you can become dizzy quite easily. But you can use it to watch YouTube’s 360-degree videos in VR, and there’s a bunch of roller-coaster-type rides, some simple games, and, of course, porn.
If nobody here wants any of these free ones, I’ll give them away on Nextdoor or Craigslist, but I figured I’d give you guys first crack at them.
First, I’m giving my usual state of the hypergrid talk at 3:30 p.m. Pacific time. I’ll be doing a roundup of this year’s top news and OpenSim statistics.
Then, at 4 p.m. Pacific, I’ll be talking about how generative AI will change content creation and coding.
I have been covering AI quite a bit lately, especially for CIO magazine. You can see all my latest AI articles here. As part of that, I’ve been talking to CEOs, CIOs and other senior executives at companies around the world, as well as leading experts on AI and the vendors building the technology. It doesn’t hurt that I have a degree in mathematics and can read the research papers. My own undergraduate research, funded by the NSF, was about a dynamical systems approach to differential equations. If you want more AI, and want to see me in the physical world, I’ll be the keynote speaker at the 2024 Data and AI Summit in March.
About the conference
It’s the eleventh annual OpenSimulator Community Conference, celebrating the community and development of the OpenSimulator open-source software. It will feature more than 70 speakers leading presentations, workshops, panel sessions, and social events across the diversity of the OpenSimulator user base.
This year’s conference kicked off yesterday with networking events, and today there will be art tours and music performances. The conference then features two days of dynamic presentations on Saturday and Sunday, including a hypergrid shopping tour and a closing-night party on Sunday. There will also be more community events and tours following the conference weekend.
Attending the conference is free, but those wishing to financially support it can still sponsor or participate in its crowdfunding campaign when registering. Participants in the crowdfunding campaign will receive a variety of thank-you gifts depending upon their level of participation, including conference VIP seating and the ability to have a virtual expo booth at the event. Conference sponsorships and crowdfunding contributions are tax-deductible to the extent allowable by law for US residents.
You can also choose to register to have an avatar account created for you locally on the OSCC conference grid, or hypergrid to OSCC via your home grid avatar.
It’s that time of year again when the open metaverse community comes together to share developments at the annual OpenSimulator Community Conference. The organizers have put out the call for presentation proposals for the 2023 event happening December 9 and 10.
This will be the eleventh year for the community-run conference, which spotlights the latest innovations in the OpenSim virtual world platform. The focus this year is on “inspiring our imagination and energizing our community” with talks from artists, educators, entrepreneurs, and builders at the forefront of shaping the open metaverse.
The call welcomes ideas from individuals or groups that demonstrate the “WOWNESS” of virtual worlds. Presenters are encouraged to share dramatic stories and make use of 3D props and visuals during their talks. The conference wants to highlight virtual worlds as a medium for creativity, enriched experiences, and real-world impact.
In addition to 30-plus main stage presentations, the two-day event includes evening social gatherings with live music and art exhibits. There is also a launch party on Friday, December 8, to kick things off. The community also sponsors tours and events across the hypergrid after the main conference wraps up.
Previous conferences have drawn as many as 800 attendees to hear the latest on OpenSim development, innovative community projects, and the possibilities of interconnected virtual worlds.
If you have a great idea for a talk, workshop, panel, or demo, submit your proposal by the October 15th deadline. Help show what’s driving creativity and connection in the open metaverse this year.
Full details on the call for proposals can be found on the conference website.
The event is organized by the metaverse-focused nonprofit AvaCon, Inc. You can watch videos from the previous years of the conference on the AvaCon YouTube channel.
Hypergrid Business is a media sponsor, and I’ll probably be doing my usual stats presentation and moderating a panel. If you have any ideas for a panel, email me at maria@hypergridbusiness.com.
Important Dates & Deadlines
October 2 – Announcement of the Call for Proposals!
October 22 – Proposals are due by 11:59 p.m. Pacific time
October 30 – Proposal acceptance emails sent out, with conference information.
November 17 – Accepted speakers must register for the conference to create an entry in the conference schedule and the program.
November 11 & 18 – Speaker orientation and training sessions and presenter booth setup to prepare speakers for the conference.
November 27 – Deadline for stage props and audio-visuals (beyond textures) for the conference program.
Not that, for some people, they were ever there. Of the 22 NFTs in Pomposelli’s AviTron A Metaverse World collection, none have seen any recorded purchases.
But he’s not alone.
According to the new report, 79 percent of NFT collections had no sales.
But people still spent a fortune minting them, wasting enough electricity to power 2,048 homes for a year, the Guardian calculated, and emitting as much carbon as 4,061 round-trip flights from London to New Zealand. For nothing.
I love how the report points out that a lot of these NFT collections still have insanely high list prices that are completely detached from reality. As if anyone would actually pay over $13 million for some random NFT named MacContract that has only ever sold for $18.
Will NFT values ever recover? Experts say they’d need historical significance like Pokémon cards, artistic merit as true artworks, or real utility.
So no, those randomly-generated cartoon apes aren’t coming back.
And neither are random snapshots of OpenSim builds.
As for utility, some say NFTs can represent virtual land or items in the metaverse. But as we’ve seen with falling VR headset sales, the metaverse hype has faded.
Plus, while NFTs can REPRESENT virtual land or items, they are not, themselves, that land or those items. They are pictures of that land or those items. Pictures that you can go and take for free. Or grab copies of from the NFT websites themselves. That NFT above, “A gentleman in a metaverse,” does not give you actual ownership of that avatar, his clothes, or the landscape behind him.

If you want to buy land in the metaverse, you can … actually, you can’t. Anybody who’s “selling” metaverse land is not actually selling anything, just renting you some server space. You don’t really get to own that server.

If you want to own a server, you can go to the server store, buy a server, buy the rights to the build from its original creator or copyright holder, and put it on the server. That is the only way, legally, to actually own virtual land. Then you’ll have actual ownership. And only then. If you do this, I recommend using the DreamGrid version of OpenSim, which, as a bonus, happens to be free.
What people are actually “selling” are bragging rights. Like those companies that claim to name a star after you or rent you a square inch of space in Scotland. You don’t actually get a star named after you. You don’t actually buy any Scottish land. You just get a piece of paper with no legal significance. And, with NFTs, you don’t even get a piece of paper.
By the way, as of this writing, that NFT is selling for $37,532 on OpenSea. And you can just right-click or screenshot, and you’ve got your own copy. I just saved you $37,532. You’re welcome.
And, for future reference, if you want to buy something that you can’t yourself use, then don’t spend more money than you can afford to lose.
Despite his own data, Vlad Hategan, NFT Gaming Specialist at DappGambl, says that NFTs still have a place in the future.
“At DappGambl, we still maintain that once the dust has settled, we will start to see an evolution within NFTs,” he wrote, but added, “To weather market downturns and have lasting value, NFTs need to either be historically relevant (akin to first-edition Pokémon cards), true art, or provide genuine utility.”
I’m not sure how any of those things can be true. First of all, the NFT platforms themselves will probably go away, since it costs money to keep the servers going and the amount of money the platforms are collecting from sales is steadily falling. At that point, the NFTs will no longer even exist, much less exist with any historical relevance. Though we’ll always have the news articles for historians to look at.
As far as “true art” is concerned, I can see them being stunt art, like that Banksy painting that destroyed itself after it was bought. But there’s a limited number of buyers for self-destructing art. And, with the Banksy painting, it did have resale value — there was a piece of the painting left, plus the frame, and the shreds. With an NFT, there is literally nothing left other than the memory of its existence — and the news articles about the initial sale. Is there some value in being known as a giant idiot who blew a bunch of money on nothing? I guess… but I’m guessing that these idiots will move on to something else that they can waste money on, instead. Something fresh and hot and new. After all, there might be news value in being the first giant idiot, but nobody pays any attention to the thousandth giant idiot or the ten-thousandth.
And genuine utility? Nobody’s come up with any uses for these things yet. They’re inefficient, ridiculously wasteful of resources, and astoundingly insecure. If someone steals your virtual wallet, there’s no FDIC insurance to get your money back.
I, personally, am glad that NFTs are dying a quick death. But I am a little worried about what’s going to come next.