Meta’s moderation change means more bad stuff will get through

(Image by Lawrence Pierce via Adobe Firefly.)

As a moderator myself, I can think of nothing more disturbing than a revised social media moderation policy presented with the caveat that more bad stuff will get through.

Recently, Mark Zuckerberg announced that Meta, the company that heralded and then fumbled the metaverse, will be dialing back moderation across its platforms. He said explicitly, “…we’re going to catch less bad stuff…”

You can watch his presentation here.

This is especially menacing because Zuckerberg identifies bad stuff as including drugs, terrorism, and child exploitation. He also specifically says Meta is going to get rid of restrictions on topics like immigration and gender. They’re going to dial back filters to reduce censorship. Oh, and he says they’re ending fact-checking.

This is a mess.

Moderation is challenging. That challenge varies with the zeitgeist, the societal character of the times, which is quite complex these days. It also varies by platform. The scope of the moderation challenge on Facebook is greater than at Hypergrid Business, yet the core issues are the same. Good moderation preserves online well-being for contributors and readers, while respecting genuine alternative perspectives.

At Hypergrid Business, we have discussion guidelines that direct our moderation. Primarily, we moderate content that is likely to cause personal harm, such as malicious derision and hate speech directed at specific groups or individuals.

At Hypergrid Business, malicious derision, a kind of “bad stuff,” was driving away contributors, and letting in more of it would not have improved the discussions. We know this because once we instituted discussion guidelines that removed malicious derision, more contributors posted more comments. So when Zuckerberg says Meta intends to get rid of moderation restrictions on topics like gender and immigration, we know from experience that the bad stuff will be malicious derision and hate speech aimed at vulnerable and controversial groups, and that it will not improve discussions.

The unfortunate ploy in Meta’s new moderation policies is the use of the expression “innocent contributors” in the introductory video presentation. Zuckerberg says that the moderation policies on Meta platforms have blocked “innocent contributors.” Although the word “innocent” typically conveys purity of disposition, intent and action, Zuckerberg uses it to refer to contributors whether they are the victims or the perpetrators of malicious commentary. This confounding use of “innocent” is a strategic verbal misdirection: Zuckerberg gets to appear concerned while pandering to any and all sensibilities.

Zuckerberg’s emphasis, however, is not limited to moderation filters. He is laser-focused on how Meta is going to end third-party fact-checking entirely. Zuckerberg pins the rationale on the assertion that fact-checking is too biased and makes too many mistakes, but he offers no examples of what that alleged shortcoming looks like. Nonetheless, he puts a number on his concerns, saying that if Meta incorrectly censors just 1 percent of posts, that’s millions of people.

Zuckerberg further asserts that fact-checkers have destroyed more trust than they’ve created. Really? Again, no real-world examples are presented. But just as a thought experiment, wouldn’t a 99 percent success rate actually be reassuring to readers and contributors? Of course, he’s proposing an arbitrary percentage by framing the 1 percent statement as a hypothetical, so in the end he’s simply being disingenuous about the issue.

Facts are essential for gathering and sharing information. If you haven’t got an assurance you’re getting facts, then you enter the fraught areas of lies, exaggerations, guesses, wishful thinking… there are many ways to distort reality.

It’s fair to say that fact-checking can fall short of expectations. Facts are not always lined up and ready to support an idea or a belief. It takes work to fact-check and that means there’s a cost to the fact-checker. A fact used in a misleading context leads to doubts over credibility. New facts may supplant previous facts. All fair enough, but understanding reality isn’t easy. If it were, civilization would be far more advanced by now.

Zuckerberg, however, has an obvious bias of his own in all of this. Meta doesn’t exist to ensure that we have the best information. Meta exists to monetize our participation in its products, such as Facebook. Compare this to Wikipedia, which depends on donations and provides sources for its information.

Zuckerberg argues against the idea of Meta as an arbiter of truth. Yet Meta products are designed to appeal to the entire planet and have contributors from the entire planet. The content of discussions on Meta platforms impacts the core beliefs and actions of millions of people at a time. To treat fact-checking as a disposable feature is absurd. Individuals cannot readily verify global information. Fact-checking is not only a transparent approach for large-scale verification of news and information, it’s an implicit responsibility for anyone, or any entity, that provides global sharing.

Facts are themselves not biased. So what Zuckerberg is really responding to is that fact-checking has appeared to favor some political positions over others. And this is exactly what we would expect in ethical discourse. All viewpoints are not equally valid in politics or in life. In fact, some viewpoints are simply wish lists of ideological will. If Zuckerberg wants to address bias, he needs to start with himself.

As noted, Zuckerberg clearly seems uncomfortable with Meta in the spotlight on the issue of fact-checking. Well, here’s a thought: Meta shouldn’t be deciding whether something is true or not. That’s what fact-checking services are for, and they place the burden of legitimacy on outside sources. The only thing Meta has to arbitrate is its contracts with fact-checking organizations. When Zuckerberg derides and discontinues third-party fact-checking, he isn’t just insulating Meta from potential controversies. He uncouples Meta’s contributors from their grounding and responsibilities. The consequence, stated in his own words: “…we’re going to catch less bad stuff…”

What Zuckerberg proposes instead of fact-checking is something that completely undermines the intrinsic strength of facts and relies instead on negotiation. Modeled on the Community Notes system on X, Meta will only allow “approved” contributors to post challenges to posts. The notes they post will only be published if other “approved” contributors vote on whether those notes are helpful, and then an algorithm weighs the ideological spectrum of all those voting contributors to decide if the note finally gets published. Unsurprisingly, it has been widely reported that the majority of users never see notes correcting content, regardless of the validity of the contributors’ findings. Zuckerberg argues for free speech, yet Community Notes works as effective censorship, suppressing challenges to misinformation.
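To see why so few notes ever surface, consider a drastically simplified sketch of the “bridging” idea behind Community Notes-style scoring. This is purely illustrative: the real systems at X and Meta use matrix factorization over rater histories and are far more involved, and the clusters and threshold below are invented placeholders.

```python
# A drastically simplified, purely illustrative sketch of "bridging"-style
# note scoring. The real algorithms use matrix factorization over rater
# histories; the clusters and threshold here are invented placeholders.

def note_is_published(ratings, threshold=0.7):
    """ratings: list of (viewpoint_cluster, found_helpful) pairs from approved raters.
    The note is only shown if raters in *every* cluster, on average, found it helpful."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    if len(by_cluster) < 2:  # no cross-viewpoint agreement is possible yet
        return False
    return all(sum(votes) / len(votes) >= threshold for votes in by_cluster.values())

# One cluster finds the note helpful, the other is split -- so it never gets shown:
print(note_is_published([("left", True), ("left", True), ("right", True), ("right", False)]))
# False
```

Requiring agreement across viewpoints sounds reasonable, but in practice it means a factually solid note can sit unpublished indefinitely.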

Clearly, getting to the facts that support our understanding of the realities of our world is increasingly on us as individuals. But that takes effort and time. If our sources of information aren’t willing to verify the legitimacy of that information, our understanding of the world will absolutely become more, rather than less, biased. So the next time Zuckerberg disingenuously prattles on about his hands-off role supporting the First Amendment and unbiased sharing, remember that what he’s really campaigning for is to allow the sea of misinformation to expand exponentially, at the expense of the inevitable targets of malicious derision. Zuckerberg’s bias is to encourage more discussion by any means, a goal which, for a platform with global reach, is greatly aided by having less moderation. Moderation that protects you at that scale is being undermined. Zuckerberg said it himself: “…we’re going to catch less bad stuff…”

Storylink Radio Brings Interactive Storytelling to the Opensim World’s Fair

(Image courtesy Storylink Radio.)

Storylink Radio is dedicated to the art of storytelling, keeping literature—both ancient and modern—alive and engaging through entertainment. Our mission extends beyond storytelling into immersive language learning, offering emergent readers and language learners a unique way to experience stories.

With a 40-region estate in Kitely, Storylink Radio hosts live storytelling sessions, simulcast on YouTube and in collaboration with the Seanchai Library in Second Life, allowing audiences to interact in real time. Our extensive catalog includes a podcast and YouTube channel featuring hundreds of hours of free storytelling programming.

Now, visitors can interact with authors and their characters at the Storylink Radio Exhibit at the OpenSim World’s Fair, hosted on Wolf Territories Grid.

Hypergrid link: grid.wolfterritories.org:8002:OpenSim Worlds Fair

An Interactive, AI-Driven Exhibit

The quarter-region exhibit showcases a collection of thematic vignettes, each tied to popular past storytelling sessions. Guests can explore multiple interlinked locations, accessible by walking bridges or a teleport network.

At every location, visitors can experience on-demand storytelling and engage with AI-powered personalities — the Aithereals — who bring characters and literary figures to life. These AI cast members, powered by OpenAI, are designed to embody their unique personas authentically, engaging guests in dynamic, real-time conversations.

Unlike scripted NPCs, Aithereals are autonomous and unscripted, ensuring unpredictable, immersive interactions—whether discussing their own stories, characters, or anything the guest wishes to explore.
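Storylink Radio hasn’t published the internals of its Aithereals, but as a rough illustration of how an in-world character can be wired to OpenAI’s chat API, a bridge might look something like the sketch below. The persona prompt, model name and relay function are assumptions for illustration, not the exhibit’s actual code.

```python
# A minimal, hypothetical sketch of an NPC-to-OpenAI bridge. The persona,
# model name and relay function are illustrative assumptions, not Storylink
# Radio's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = ("You are Edgar Allan Poe, holding court in your parlor. Stay in "
           "character, keep replies brief, and never break the fourth wall.")

history = [{"role": "system", "content": PERSONA}]

def aithereal_reply(visitor_message: str) -> str:
    """Send the visitor's chat line, plus conversation history, to the model."""
    history.append({"role": "user", "content": visitor_message})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# A region-side script would pass listened chat to this function and have
# the NPC speak the returned text.
print(aithereal_reply("Tell me about the raven at your window."))
```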

How Guests Can Interact

Each location includes two clickable objects to guide engagement:

  • Guide Box – Provides details about the vignette and instructions on how to interact with Aithereals.
  • Random Q’s Box – While guests can chat freely with the AI cast, this box offers conversation starters.

Vignettes & Aithereals

  • Poe’s Parlor (Shadows of Poe): Aithereals Edgar Allan Poe, Lenore, The Raven
  • Wonderland (Alice in Wonderland – Tim Burton inspired): Aithereals Alice, Mad Hatter, White Rabbit, Cheshire Cat, Caterpillar, Red Queen, Talking Mushroom, Teapot
  • Frankenstein’s Lab (Bride of Frankenstein): Aithereals Victor Frankenstein, The Monster, Bride of Frankenstein
  • Dracula’s Castle (Dracula): Aithereals Dracula, Bram Stoker
  • Alice’s Restaurant: Aithereals Arlo Guthrie
  • The OASIS (Ready Player One/Ayn Rand/Rush 2112): Aithereals Parzival, Art3mis
  • Pirate Ship (High Seas Cthulhu): Aithereals Capt. Jack Sparrow
  • The Ice Cave (The Ice Dragon – George R.R. Martin/Game of Thrones): Aithereals The Ice Dragon
  • Griffin’s Nest (Fear of Falling): Aithereals Three Griffin Babies (unhatched)
  • SciFi Island (Blaze of Glory – Robert Silverberg): Aithereals Robert Silverberg, Isaac Asimov, C3PO, R2D2, Marvin (Hitchhiker’s Guide), Rachel Rosen (Blade Runner), Cameron Phillips (Terminatrix), “Ship” (a sentient space freighter)
  • Easter Island (Polynesian Myths & Legends): Aithereals Pele (Fire Goddess), The Moai (Three speaking stone figures)
  • Polynesia (Polynesian Myths & Legends): Aithereals Moana from Disney, Maui from classic mythology
  • Moby Dick (Moby Dick – Herman Melville, Abbreviated): Aithereals Capt. Ahab, Moby Dick (the whale)
  • Lighthouse (Lighthouse Terrors): Aithereals H.P. Lovecraft
  • Grotto: Aithereals The Kraken

OpenAI’s new reasoning AI model achieves human-level results on intelligence test

(Image by Alex Korolov via OpenArt.)

I just watched OpenAI’s announcement about its upcoming o3 and o3-mini reasoning AI models, and they’ve reached an artificial general intelligence milestone with this release.

The biggest news is that the o3 model took a major leap towards Artificial General Intelligence, or AGI, when it achieved human-level performance on a test where it had to solve new problems it had never encountered before.

AGI is a hypothetical type of AI that has reached or surpassed human-level cognitive abilities, as well as the mental flexibility of the human brain to solve many different types of problems, with some researchers suggesting it could theoretically become sentient or self-aware. A scary take on this would be the Skynet AGI in the Terminator movies that wants to destroy all humans. Yikes.

On Friday, December 20th, OpenAI CEO Sam Altman and researchers Mark Chen and Hongyu Ren hosted a 22-minute livestream showcasing the new o3 models’ capabilities.

 

About five minutes into the OpenAI announcement, Altman and Chen welcomed Greg Kamradt, president of the ARC Prize Foundation, to talk about the o3 model’s AGI capabilities.

Kamradt announced that the o3 system scored 87.5% on the ARC-AGI benchmark test for general intelligence, way above the previous AI best score of 55%. Human performance for this test is around 85%, so the o3 model’s results have crossed into new territory in the ARC-AGI world, Kamradt said in the presentation.

“I need to fix my AI intuitions about what AI can actually do and what it’s actually capable of, especially in this o3 world,” he said.

Screenshot of ARC-AGI test. (Image courtesy OpenAI.)

ARC-AGI tests an AI system’s ability to figure out a previously unknown, or novel, problem. In the example above, given by Kamradt during the OpenAI announcement, a human can easily figure out that a darker blue square is added next to the lighter blue squares to make a bigger square. This is the type of problem that has been a real challenge for AI systems to figure out, until now.
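For a sense of what such a puzzle looks like in data form, here is a toy, hand-made illustration of the “complete the square” idea, not an actual ARC-AGI test item: light-blue cells are marked 1, and the solver must add dark-blue cells, marked 2, to fill out the square.

```python
# A toy, hand-made illustration of an ARC-style grid puzzle (not an actual
# test item): add dark-blue cells (2) so the light-blue cells (1) form a
# complete square.

def complete_square(grid):
    """Fill the bounding box of the light-blue (1) cells with dark blue (2)."""
    rows = [r for r, row in enumerate(grid) for v in row if v == 1]
    cols = [c for row in grid for c, v in enumerate(row) if v == 1]
    out = [row[:] for row in grid]
    for r in range(min(rows), max(rows) + 1):
        for c in range(min(cols), max(cols) + 1):
            if out[r][c] == 0:
                out[r][c] = 2
    return out

puzzle = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]

for row in complete_square(puzzle):
    print(row)
# [0, 0, 0, 0]
# [0, 1, 1, 0]
# [0, 1, 2, 0]
# [0, 0, 0, 0]
```

A person infers the rule from a couple of examples; the point of ARC-AGI is that, until now, AI systems largely couldn’t.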

The o3 and o3-mini models showed impressive results in other areas as well. For example, they were able to surpass human PhD-level thinking about difficult science questions. The o3 model reached a score of 87.7%, whereas a human PhD-level expert might get a score of around 70% in their chosen field.

o3 and o3-mini PhD level thinking. (Image courtesy OpenAI.)

OpenAI’s o3 model also achieved an impressive score of 25.2% on EpochAI’s Frontier Math benchmark, which is considered to be the toughest mathematical benchmark, said OpenAI’s Chen in the presentation.

“This is a dataset that consists of novel, unpublished, and also very hard problems,” Chen said. “It would take professional mathematicians hours or even days to solve one of these problems.”

EpochAI o3 Frontier Math results. (Image courtesy OpenAI.)

The big difference between o3 and o3-mini is that o3 is the full, most powerful version, which is also the most expensive, while o3-mini is a more cost-effective model optimized for situations where the highest computational power isn’t necessary.

As for why OpenAI skipped straight from o1 to o3, Altman said it was out of respect for their friends at Telefonica, which owns the O2 telecommunications brand, as well as OpenAI having a grand tradition of being “really, truly bad at names.”

Coding like a pro

o3 Competition Code Challenge. (Image courtesy OpenAI.)

Are human coders about to be out of a job?

Some programmers are saying that nothing is going to change for them in the near future, but others are sounding the alarm about getting replaced by AI.

o3 just beat out 99% of competitive coders in a coding challenge. However, o3 also costs more than $1,000 to perform a single task at the highest compute setting.

So far, we’re still going to need humans to understand the code we’re writing and to make decisions about how applications are going to be designed. But maybe not for long. Here’s a good take on how AI is going to affect programmers:

Revolutionizing industries

OpenAI will soon be releasing autonomous AI agents that can manage complex tasks without human intervention. Businesses large and small could benefit from AI performing jobs that only humans could do before.

The downside?

Besides smart AI being very energy intensive and expensive, many people could also be out of jobs as their industry is heavily disrupted by the use of increasingly capable AI models.

And there could be other unforeseen catastrophic effects we haven’t accounted for. Remember Terminator? Even today’s less capable AIs have shown the ability to be harmful or deceitful, and AI companies, including OpenAI, released those models anyway.

In this case, OpenAI seems like it’s taking safety seriously before o3 and o3-mini meet the public.

Safety researchers are currently being encouraged to sign up to test the models before the public release, which is just around the corner. The more cost-effective o3-mini is slated for a late January release, and the full o3 version will come out shortly after that.

What do you think about the future of AGI? Is AI going to take humanity into a bright and promising future, or will we all be hiding in ditches, machine guns in hand, hoping to preserve the small human population we have left? Feel free to leave a comment and let us know what you think.

People think AIs are conscious. What could this mean for bots in OpenSim?

(Image by Maria Korolov via Adobe Firefly.)

I’ve been interacting with OpenSim bots — or NPCs — for nearly as long as I’ve been covering OpenSim. Which is about 15 years. (Oh my God, has it really been that long?)

I’ve been hoping that OpenSim writing would become my day job, but, unfortunately, OpenSim never really took off. Instead, I covered cybersecurity and, more recently, generative AI.

But then I saw some reporting about new studies of AI, and immediately thought: this could really be something in OpenSim.

The study was published this past April in the journal Neuroscience of Consciousness, and it showed that a majority of people – 67% to be exact – attribute some degree of consciousness to ChatGPT. And the more people use these AI systems, the more likely they are to see them as conscious entities.

Then, in May, another study showed that 54% of people, after a conversation with ChatGPT, thought it was a real person.

Now, I’m not saying that OpenSim grid owners should run out and install a bunch of bots on their grids that pretend to be real people, in order to lure in more users. That would be dumb, expensive, a waste of resources, possibly illegal and definitely unethical.

But if users knew that these bots were powered by AI and understood that they’re not real people, they might still enjoy interacting with them and develop attachments to them — just like we get attached to brands, or cartoon animals, or characters in a novel. Or, yes, virtual girlfriends or boyfriends.

In the video below, you can see OpenAI’s recent GPT-4o presentation. Yup, the one where ChatGPT sounds suspiciously like Scarlett Johansson in “Her.” I’ve set it to start at the point in the video where they’re talking to her.

I can see why ScarJo got upset — and why that particular voice is no longer available as an option.

Now, as I write this, the voice chatbot they’re demonstrating isn’t widely available yet. But the text version is — and it’s the text interface that’s most common in OpenSim anyway.

GPT-4o does cost money. It costs money to send it a question and to get a response. A million tokens’ worth of questions — or about 750,000 words — costs $5, and a million tokens’ worth of responses costs $15.

A page of text is roughly 250 words, so a million tokens is about 3,000 pages. So, for $20, you can get a lot of back-and-forth. But there are also cheaper platforms.

Anthropic’s Claude, for example, which has tested better than ChatGPT in some benchmarks, costs a bit less — $3 for a million input tokens, and $15 for a million output tokens.

But there are also free, open-source platforms that you can run on your own servers with comparable performance levels. For example, on the LMSYS Chatbot Arena Leaderboard, OpenAI’s GPT-4o is in first place with a score of 1287, Claude 3.5 Sonnet is close behind with 1272, and the (mostly) open source Llama 3 from Meta is not too far distant, with a score of 1207 — and there are several other open source AI platforms at the top of the charts, including Google’s Gemma, NVIDIA’s Nemotron, Cohere’s Command R+, Alibaba’s Qwen2, and Mistral.
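To put those prices in perspective, here is a quick back-of-the-envelope calculation using the GPT-4o rates quoted above ($5 per million input tokens, $15 per million output tokens, and roughly 750,000 words per million tokens). The exchange sizes are just example numbers.

```python
# Back-of-the-envelope cost estimate using the GPT-4o prices quoted above.
# The exchange sizes below are example numbers, not measurements.

INPUT_COST_PER_M = 5.00       # USD per 1,000,000 input tokens
OUTPUT_COST_PER_M = 15.00     # USD per 1,000,000 output tokens
WORDS_PER_M_TOKENS = 750_000  # rough words-per-million-tokens figure above

def exchange_cost(user_words: int, bot_words: int) -> float:
    """Approximate cost in dollars of one question-and-answer exchange."""
    input_tokens = user_words / WORDS_PER_M_TOKENS * 1_000_000
    output_tokens = bot_words / WORDS_PER_M_TOKENS * 1_000_000
    return (input_tokens * INPUT_COST_PER_M + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# 1,000 exchanges of roughly 30 words in and 80 words back:
print(round(1000 * exchange_cost(30, 80), 2))   # about $1.80
```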

I can easily see an OpenSim hosting provider adding an AI service to their package deals.

(Image by Maria Korolov via Adobe Firefly.)

Imagine the potential for creating truly immersive experiences in OpenSim and other virtual environments. If users are predisposed to see AI entities as conscious, we could create non-player characters that feel incredibly real and responsive.

This could revolutionize storytelling, education, and social interactions in virtual spaces.

We could have bots that users can form meaningful relationships with, AI-driven characters that can adapt to individual user preferences, and virtual environments that feel alive and dynamic.

And then there’s the potential for interactive storytelling and games, with quests and narratives that are more engaging than ever before, for virtual assistants that feel like true companions, and even for communities that blur the lines between AI and human participants.

For those using OpenSim for work, there are also applications here for business and education, in the form of AI tutors, AI executive assistants, AI sales agents, and more.

However, as much as I’m thrilled by these possibilities, I can’t help but feel a twinge of concern.

As the study authors point out, there are some risks to AIs that feel real.

(Image by Maria Korolov via Adobe Firefly.)

First, there’s the risk of emotional attachment. If users start to view AI entities as conscious beings, they might form deep, potentially unhealthy bonds with these virtual characters. This could lead to a range of issues, from social isolation in the real world to emotional distress if these AI entities are altered or removed.

We’re already seeing that, with people feeling real distress when their virtual girlfriends are turned off.

Then there’s the question of blurred reality. As the line between AI and human interactions becomes less clear, users might struggle to distinguish between the two.

Personally, I’m not too concerned about this one. We’ve had people complaining that other people couldn’t tell fantasy from reality since the days of Don Quixote. Probably even earlier. There were probably cave people sitting around, saying, “Look at the young people with all their cave paintings. They could be out actually hunting, and instead they sit around the cave looking at the paintings.”

Or even earlier, when language was invented. “Look at those young people, sitting around talking about hunting, instead of going out there into the jungle and catching something.”

When movies were first invented, when people started getting “addicted” to television, or video games… we’ve always had moral panics about new media.

The thing is, those moral panics were also, to some extent, justified. Maybe the pulp novels that the printing press gave us didn’t rot our brains. But Mao’s Little Red Book, the Communist Manifesto, that thing Hitler wrote that I don’t even want to name: the harm those men did was aided and abetted by the books they wrote.

So that’s what I’m most worried about — the potential for exploitation. Bad actors could misuse our tendency to anthropomorphize AI, creating deceptive or manipulative experiences that take advantage of users’ emotional connections and lead them to be more tolerant of evil.

But I don’t think that’s something that we, in OpenSim, have to worry about. Our platform doesn’t have the kind of reach it would take to create a new dictator!

I think the worst that would happen is that people might get so engaged that they spend a few dollars more than they planned to spend.

Virtual curating frees artist

A virtual art gallery built to scale with imported artwork. (Image courtesy Lawrence Pierce.)

One of my interests is the relationship between the real world and the virtual. If the virtual can inspire or inform the real, it then transcends its technical isolation.

Curating an art exhibition is just such an opportunity. In the physical realm, curating is labor intensive, so decisions on placement carry considerable overhead. On the other hand, virtual simulation can be quite rapid and efficient, conditions that support flexible outcomes.

In real life, my profession is that of photographer. Photographers typically aren’t involved in the planning of an exhibition. We basically record what is, not what’s yet to come. But is there a practical way to use photography of the artwork to then rapidly create a virtual gallery and curate an exhibition that will exist in physical reality?

When Rafael Perea de La Cabada came to me for archival-quality photography of his art in advance of an upcoming exhibition, our conversation turned to the challenges he faced in curating the show. He wanted to explore various ideas but was feeling restricted by the physicality of moving artwork from his studio to the distant exhibition location, and then into various trial-and-error positions in the gallery. I proposed creating a virtual environment, a space we could walk through virtually, in which to curate his show quickly and creatively. The application for doing this was OpenSim.

In OpenSim, a simple box can be quickly stretched and resized to make a floor, wall, picture frame or a ‘canvas’ on which to apply the JPEG photographic image of an artwork. Photographic images of sculpture can be post-processed to have transparency around the art, preserved by saving to the PNG format (sculptural art photographed against a solid color background or black is fairly easy to separate from the background).

Lights are also available to simulate general illumination. Build times vary with the size of an exhibition, but a virtual gallery with all the essential details can be created in a handful of hours, and curating the show can begin immediately as artwork images are uploaded.

Perea had previously engaged me to photograph his artwork with archival protocols. This meant high-resolution captures, flat, even lighting with suppression of ambient light contamination, and the inclusion of captures that included a color chart for setting white balance in post-production. While these are ideal images for all purposes, a virtual gallery could just as well be populated with basic clean phone photography. After all, the virtual gallery is created to facilitate curating a show, not making the final presentation.

Note: Depending on the number of artwork images and their native resolution, you may be able to handle file transfers via email, but if not, then a free account with Dropbox will have plenty of capacity to handle all the transfers.

To make virtual curating work, every artwork photo needs to be documented with real-world measurements, because the virtual gallery and its contents will be built to scale, which is actually easier than it sounds. Artists typically already know the dimensions of their work. I put those dimensions into the filenames of the artwork images when I prepared them, which kept the dimensions intrinsically associated with each artwork. The artwork dimensions were provided to me in inches; since the OpenSim viewer uses metric, I converted them to meters.
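I did the conversions by hand, but the bookkeeping is trivial to script. Here is a minimal sketch, assuming dimensions embedded in filenames the way I did it; the filename pattern is just an example.

```python
# A minimal sketch of the inches-to-meters bookkeeping described above.
# The filename pattern is an example of how dimensions can be embedded
# in artwork image names; adapt the regex to your own naming scheme.
import re

INCHES_TO_METERS = 0.0254

def prim_size_from_filename(filename: str):
    """Parse e.g. 'perea_untitled_24x36in.png' and return (width_m, height_m)."""
    match = re.search(r"(\d+(?:\.\d+)?)x(\d+(?:\.\d+)?)in", filename)
    if not match:
        raise ValueError(f"No dimensions found in {filename}")
    width_in, height_in = float(match.group(1)), float(match.group(2))
    return round(width_in * INCHES_TO_METERS, 3), round(height_in * INCHES_TO_METERS, 3)

print(prim_size_from_filename("perea_untitled_24x36in.png"))   # (0.61, 0.914)
```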

From file naming to texture mapping to collection building. (Image courtesy Lawrence Pierce.)

To build any gallery, it’s first necessary to acquire some reference material. For the Perea project, I knew the gallery would be the Ann Foxworthy Gallery at Allan Hancock College. Google searching produced a number of images, and the gallery director also supplied digital images and overall dimensions.

This gallery has a number of wall angles, but as with the art, I built the gallery to scale so all the components readily fit together. I also added some optional details (such as the track lighting). Note that the ceiling is best built as a single object (or multiple linked objects if necessary) that can be moved aside, to provide easy placement access for the art.

The gallery with the ceiling moved for easy access to the art. (Image courtesy Lawrence Pierce.)

There is, of course, a learning curve to working in a virtual 3D environment, and this deters many people, including artists, from using software like OpenSim. Yet of all the 3D tools I’ve used, including Maya, Modo, SketchUp, ZBrush and 3D Studio, OpenSim is the fastest and most user friendly.

The trade-off is the ultimate visual quality. High-end applications like Maya are used for cinema quality CGI. OpenSim is not, by itself, capable of that level of visual realism.

But our purpose was to curate an art exhibition virtually, which only the artist and a handful of other people would see. For that kind of project, OpenSim fills the bill, as you can see below in sample images.

The real gallery curated from the virtual gallery as reference. (Image courtesy Lawrence Pierce.)

For the Perea exhibition, he had a preliminary set of ideas as to how the art would be arranged. Once the virtual gallery was assembled, and the photos of his artwork were uploaded and attached to “canvases” sized to the dimensions of each piece, I moved each artwork into the initially proposed locations on the walls. Then we could begin to try various ideas. This process was so fast, we could often sit on the phone while I made the changes and forwarded screenshots via email.

Alternatively, Perea could have run his own OpenSim viewer and seen the changes I made, or made his own changes. There’s a lot of flexibility here, but in this case he focused more on direction and had me running the controls.

Perea commented more than once that this process was a great relief.

To give the virtual project an added sense of completion, I exported 360-degree panoramic views, making it possible to create a virtual tour. Recent Firestorm and Second Life viewers have a 360-degree snapshot feature.

Nine key positions were effective in providing a walk-through experience. The 360-degree panoramic views were also valuable for sharing and collaborating with other stakeholders, such as the gallery director. And to move back into the realm of the physical world, a photographic 360-degree virtual tour was made on-site. This preserves the exhibition, immersively and in perpetuity.

Tours were made with 3DVista, a commercial application for virtual tour production.

Curating an art exhibition takes careful planning. Much of that process is conceptual, but the actual installation of art requires considerable physical work. Using a 3D environment like OpenSim presents the opportunity for a gallery director, curator, or the artist to previsualize an exhibition immersively, at scale and relatively quickly. This then makes possible the highly effective exploration of curating options, before labor intensive and essentially permanent installation decisions are implemented.

Link to the 360-degree OpenSim virtual tour. Click on art to open info panel.

Link to the 360-degree photographic virtual tour. Again, click on art to open info panel.

3rd Rock, OpenSim’s second-oldest grid, is shutting down

HG Safari at the JFK Dallas build on 3rd Rock Grid. (Image courtesy HG Safari.)

3rd Rock Grid, the second-oldest grid in OpenSim, will be shutting down soon.

“We have to shut down the grid due to a few circumstances that have technical consequences, making it impossible to further manage the grid,” said 3rd Rock Grid board member Florin Spanachi, who is also known as Eldovar Lamilton in-world.

According to grid residents, the grid lost a key member of its technical staff when he left suddenly. Then another core member, Kira Tiponi, passed away, leaving the grid without access to a critical resource.

3rd Rock Grid is a non-profit, owned by the Netherlands-based Cultural Harbour foundation. According to Hypergrid Business records, it was founded in February of 2008, making it the second-oldest OpenSim grid after OSgrid.

“Technical help is not possible, nor will a fundraiser help,” Spanachi added.

He said that the grid is in contact with its active users to manage the exit as smoothly as possible.

As of this writing, 3rd Rock Grid has 196 active users, making it the 41st-largest by active user count.

It is also reporting a total land area of 872 standard region equivalents, making it the 14th-largest grid by land area. It also has 13,615 registered users, making it the grid with the fifth-largest registered user base.

According to 3rd Rock Grid board member Tara Dockery, also known as Thoria Millgrove in-world, saving the grid would require a complete re-build.

“Due to a series of unfortunate events that had technical impact, an inaccessible server, and well over a decade of technical debt in the asset database, we are faced with an unmanageable grid,” she told Hypergrid Business. “There are people investigating ways to move forward and salvage as much as possible, but no firm decisions have been made, other than that we are shutting the existing grid down on May 15.”

OpenSim community dismayed and saddened

The OpenSim community was saddened to hear the news.

Marianna Monentes

“The 3rd Rock Grid holds such special memories for me,” said Marianna Monentes. “I visited as often as possible, and I’m deeply saddened to hear about the passing of one of its key techs. Please accept my sincere condolences to the grid owners and the entire community.”

Monentes is an in-world jewelry designer.

“I am deeply sorry about what happened to them,” said Andrew Simpson, owner of AnSky Grid.

“It goes without saying that it is always sad to see a grid go, regardless of the reason,” said Ansjela Amat, owner of Ansjelagrid, which, like 3rd Rock, is also based in the Netherlands.

“This is sad news,” said Myron Curtis, who said he can make resources available for grid or web hosting. Curtis is the owner of A Dimension Beyond, an OpenSim hosting company, and the founder of Virtual Worlds Grid.

Candy Cane Lane on the Holiday Isle region of 3rd Rock Grid. (Image courtesy VisionZ.)

Offers of help

In fact, many grid owners are offering their help.

Terry Ford

One of those grid owners is Terry Ford, the original founder of 3rd Rock Grid. Ford now runs DigiWorldz, a commercial grid and OpenSim hosting company, but has continued to provide technical support to 3rd Rock even after leaving.

“I would not like to see 3rd Rock Grid gone as it is a very important part of the OpenSim history,” he said.

3rd Rock was the first grid with a working permissions system and the first grid with a working economy, he said. It has also held a number of great events and fundraisers over the years, including several for Doctors Without Borders.

“There are many current and past members of 3rd Rock Grid, including myself, and some who have now passed away, who put in much effort to ensure it was a great grid to call home,” he said. “I have offered to help in any way I can and have reached out to many of the 3rd Rock Grid members voicing the same.”

Several grid owners suggested that it may be possible to reconstitute the grid by exporting and re-uploading the region files, also known as OARs, of the individual regions. OpenSim also has support for exporting individual user inventories.

Aerial view of Music Village. (Image courtesy 3rd Rock Grid.)

If the grid is not rescued, then residents will have to find new homes.

3rd Rock Grid residents who are able to get copies of their OAR region export files, or their IAR inventory export files, will also have many grids ready to welcome them.

“If any of the residents have the ability to extract their OARs and need a temporary home I am willing to set them up on a temporary basis with a four-by-four region,” said CatGrid owner Mike Cataldo, also known as Michael Timeless in-world. “Most of my residents are older military veterans but we are always willing to help those in need.”

He said that people are welcome to contact him directly at timeless.owltiger@gmail.com.

“While my grid is not as large as 3rd Rock Grid, I have spent time there in the past,” he said.

AvatarLife is also offering free land to 3rd Rock Grid residents.

“If they have OAR files of their lands we can get them to AvatarLife without any cost, as lands in AvatarLife are free,” said Sushant KC, CEO of AvatarLife, who said that he was sad to hear that 3rd Rock Grid was closing down.

“I offer my technical support 24-7 if they want to start 3rd Rock Grid again from new servers,” said GBG World CEO Nick Mit, also known as Anytos Atlas in-world, who said he was so sorry to hear the news about 3rd Rock Grid.

GBG World also has free home plots available and offers discounted region hosting to former 3rd Rock residents, he added.

Museum of Natural History on 3rd Rock Grid. (Image courtesy 3rd Rock Grid.)

“We are really saddened by the shutdown of 3rd Rock and are happy to see how we can assist both the 3rd Rock team and any users in any way we can,” said Paul Clevett, also known as Lone Wolf in-world. He is the director of Wolf Software Systems Ltd., the company that owns OpenSim’s largest and most popular world, Wolf Territories Grid.

He has previously told Hypergrid Business that he’s happy to help other grids with technical issues.

He said that he’s already been approached by some 3rd Rock residents. “We’re keen to help,” he said. “If they rent regions we are going to give them some bonus prims and also keep them together in the same area so they can keep their community.”

As a grid that dates all the way back to the earliest days of OpenSim, though, those technical issues can be significant.

Keeping grids active over many years requires a lot more work than people realize, said Kitely co-founder and CEO Ilan Tochner. “The longer grids are active the more technical expertise is required to overcome all the issues that accumulate over time.”

And the loss is even more devastating to the community when those grids close.

Ilan Tochner

“It’s tragic when grids close and their residents lose their home and all the content they’ve collected in their inventories,” he said. “It’s especially saddening when those grids are ones that have been an important part of the OpenSim ecosystem for as long as 3rd Rock Grid has.”

Kitely, in addition to being one of the biggest commercial grids in OpenSim, also runs the largest online marketplace for OpenSim, the Kitely Market.

Tochner said that if any customers bought content and had it delivered to 3rd Rock Grid, there’s a tool that can help merchants easily re-deliver content to all those customers.

“This Kitely Market feature is designed to enable merchants to easily and reliably help people recover the items they lost when their grid shuts down,” he said.

Zuckerberg’s VR Vision: Will Rejecting Google’s Android XR Cost Meta in the Long Run?

(Image by Maria Korolov via Adobe Firefly.)

Could Mark Zuckerberg’s lust-driven quest to own VR, AR and mixed reality end up costing Meta at the end of the day?

That’s a question being posed after The Information reported recently that Google had approached Meta about partnering on its new software platform — Google Android XR — which is being developed for virtual reality. While rumors have Meta talking to hardware companies like LG Electronics about building new devices using Quest’s software, there’s nothing to indicate that Zuckerberg envisions shifting away from the open-sourced version of Google’s Android OS that Meta currently uses.

In fact, you could argue there’s even a little bit of bad blood in what is otherwise a solid working relationship.

“Zuckerberg believes he still has a significant market advantage, which he does,” said Rolf Illenberger, CEO of VRdirect, who has been instrumental in deploying VR and AI solutions for major enterprise clients like Nestle, Siemens, and Lufthansa. “Not only that, but Meta and Google both launched platforms back in 2018 — and while Google abandoned theirs, Zuckerberg has taken a lot of heat, and put a lot of dollars into developing Oculus and its audience over the last six years.”

“So, while there’s give and take, there’s far more to Meta wanting to plant its flag in VR,” he added.

That give and take is simple. Despite what would be described as a cordial and friendly working relationship, Google doesn’t let Meta offer the full range of Google apps on Meta’s headsets. If the two companies partnered more formally on Android XR, Google would be willing to offer greater access to those apps.

Meta addressed this in a statement blaming Google’s restrictive terms.

“After years of not focusing on VR or doing anything to support our work in the space, Google has been pitching AndroidXR to partners and suggesting, incredibly, that we are the ones threatening to fragment the ecosystem when they are the ones who plan to do exactly that,” said CTO Andrew Bosworth in a post on Threads earlier this month. “We would love to partner with them. They could bring their apps to Quest today!”

It would be a win for their developers and all consumers, he added, and said that Meta plans to keep pushing for it.

“Instead, they want us to agree to restrictive terms that require us to give up our freedom to innovate and build better experiences for people and developers,” he continued. “We’ve seen this play out before and we think we can do better this time around.”

It appears that Samsung will be the first hardware maker to use Android XR and, according to The Information, Google has been pitching it to other hardware makers.

“One of Apple’s biggest go-to-market advantages with VR was its integrated ecosystem around the OS,” says Illenberger. “They have made the OS and development of apps around the VisionOS part of their priority, even with a first-generation Vision Pro that has a price point designed not to captivate a consumer market just yet.”

And if that weren’t enough, the bickering between Meta and Google over software, with Apple looming in the background, brings up another point: how something as simple as an app, like iMessage, can move markets when these three are involved.

“If you write someone a text, photo, or video on the Apple Vision Pro and it’s going to someone else in that ecosystem, whether on a phone, Watch or iPad, the bubble is still going to be blue,” Illenberger said. “Not that we’ve found the device to be particularly useful for it. But it’s amazing how something as simple as the color of a text bubble on a messaging app can move consumer interest between these three tech mega giants.”

How to use AI to write an opinion column

(Image by Maria Korolov via Adobe Firefly.)

So. You have some thoughts about where OpenSim is going. Or there’s a cool new fashion designer in OpenSim you want to tell people about. Or there’s a feature you’d really like to see implemented.

You’ve been thinking for a while about writing it up and sending it to Hypergrid Business to be published, but writing is just so much work!

Wouldn’t it be great if you could get an AI to read your mind and just write the article for you?

But you can’t. And if you just tell ChatGPT or Claude to “write an article about how great OpenSim is” you’ll get something generic and unreadable. Plus, it won’t have any of your unique insights or information that only you know, which is why you wanted to write the article in the first place.

(Image by Maria Korolov via Adobe Firefly.)

Here’s what you do.

If you’re like me, and think best while talking, then get a transcription app — I use the free Otter AI app and love it — and dictate your thoughts. Now, Otter only supports English, but there are other apps for other languages. Just Google for it.

Or, if you think in bullet point lists, create a list with the points you’d like to make. Don’t worry about grammar or spelling, or organization. Just do a brain dump.

Then open your favorite AI app — I recommend Claude AI because it doesn’t use your info for training data — and follow these steps:

Cut-and-paste the following prompt:

I’d like to turn the following notes into an opinion column. The first thing I’d like you to do is read the notes and ask me questions. Is there anything that needs clarification or should be expanded on? Is there anything that doesn’t make sense? Are there any points that could use personal anecdotes or concrete examples? Thanks!

Then cut-and-paste your notes and hit the button to ask the question.

The AI should now ask you some follow-up questions. You can provide more information, or you can tell the AI to just skip that question, or ask the AI what it would suggest.

Once you’re happy that everything has been pulled out of your head, you can go ahead and ask the AI to write the article.

Cut-and-paste the following prompt:

Please write a column based on my notes and our conversation. It should be in the first person, using Associated Press format, in a casual, blog writing style. Paragraphs should be short. Quotes should begin paragraphs. No conclusion needed. Use the inverted pyramid structure. Stick carefully just to the information that I provided.

Now it should provide you with a first draft of the column.
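(If you’d rather script this workflow than work in the chat window, the same two-step flow runs through an API. Below is a minimal sketch assuming the Anthropic Python SDK; the model name, notes file and placeholder answers are examples, not requirements.)

```python
# A minimal sketch of the same two-prompt workflow via the Anthropic Python
# SDK. The model name, notes file and placeholder answers are examples.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"

notes = open("my_notes.txt").read()

conversation = [{
    "role": "user",
    "content": "I'd like to turn the following notes into an opinion column. "
               "First, read the notes and ask me clarifying questions.\n\n" + notes,
}]

questions = client.messages.create(model=MODEL, max_tokens=1024, messages=conversation)
print(questions.content[0].text)   # answer these before the next step

conversation.append({"role": "assistant", "content": questions.content[0].text})
conversation.append({"role": "user", "content":
    "Here are my answers: ... Now please write the column based on my notes and our "
    "conversation: first person, AP style, casual, short paragraphs, inverted pyramid."})

draft = client.messages.create(model=MODEL, max_tokens=2048, messages=conversation)
print(draft.content[0].text)
```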

Now you can ask it to, say, rearrange sections, or add more information. And if it got anything wrong, tell it so it can fix the draft. Once you’re generally happy with how the column looks, ask the following questions:

  • Please review the story for accuracy. Are there any places where it contradicts the information I gave you?
  • Please review the story’s organization and structure. Is the order the best possible order for this topic? Is anything repeated? Are any significant points not given enough time?
  • Please review the story for writing style. It should be casual and conversational, written at a fifth-grade level, and paragraphs should be short. Are there any areas that can be simplified or rewritten to be more personal?
  • Please review the story for grammar. Remember it should use American spelling and grammar and Associated Press style.

Then say:

Please rewrite the article per your recommendations.

Take one last look at the result. After all, this is going to go out under your name. Make sure that the AI isn’t putting words in your mouth that you wouldn’t say!

Now copy the final results into a separate document and make any edits you want to make. For example, you might change some wording to be more like something you’d say.

Also, add any relevant links. For example, if you’re talking about your OpenSim grid, add a link to the grid.

Then email it to me — in the body of the email is fine — at maria@hypergridbusiness.com. If you have snapshots or illustrations that you want to use, just attach them as JPG or PNG files to the email.

If you’d like to have AI generate an illustration to go with your column, I recommend that you use Adobe Firefly. Adobe only uses fully-licensed images for its training data — no lawsuits from artists here! — and pays artists when their work is used. In fact, the first payments to artists went out last September.

Use the “Widescreen (16:9)” aspect ratio for at least one image that you submit to Hypergrid Business, since we use wide images for the featured images on our site. You can also upload a reference image to give Firefly an idea of the kind of style you’re going for, or select a particular art or photography style from the lists provided.

Of course, you don’t have to submit your column to us! You can post it on your own blog or social media. And you can use the same approach to write any other kind of content — just adjust the prompt to fit. You can use this approach to write emails or to write marketing copy for your website.

And yes, being polite helps. The AIs seem to return better results when you’re nice to them.

I tried the Apple Vision Pro and saw the future, but don’t buy it yet

Maria Korolov at the Apple store in Holyoke, Massachusetts.

This past weekend, I went to the local Apple store and got a demo of the new Apple Vision Pro headset — the one where you’ll spend a minimum of $3,500 and more likely $4,000 if you buy it.

A few years ago, I traded my iPhone for an Android because Samsung released the Gear VR headset and Apple didn’t have anything similar in the pipeline.

I still miss my iPhone, but all the phone-based VR action has been on the Android side. However, Samsung dropped its Gear VR project, and Google stopped developing its Cardboard and Daydream View platforms.

So I’m open to going back to the Apple ecosystem, if there’s something worth switching for.

Is the Apple Vision Pro the reason to switch? No.

Was the demo educational? Yes, and I’m going to tell you what I learned.

And, at the end of this article, I’ll explain who should buy the headset now, and who should wait for two or three generations.

But first, why I’m not going to switch to the iPhone and buy an Apple Vision Pro, even though I cover tech and so could deduct it as a business expense.

I can’t do my work on it

Even putting aside the fact that my work computers are all Windows, and the Vision Pro only pairs with Apple computers — and late-model computers at that — the headset itself isn’t optimal for prolonged use.

It’s heavy so you don’t want to wear it for hours. There’s no usable virtual keyboard — you’d need to use a physical keyboard, anyway. It’s hard to drink coffee in it. And you can’t attend Zoom meetings in it. Yes, you can Facetime — but only as a cartoon avatar.

And, of course, it doesn’t replace a computer. It’s an add-on to a computer. It’s basically a single external monitor for my computer. I already have two giant monitors, and the prices for monitors are ridiculously cheap now, anyway. If I wanted to upgrade a monitor, I’d just upgrade the monitor itself and not switch to a VR headset.

 

Maria wearing the Apple Vision Pro.

Still, the graphics are amazing. I enjoyed the almost-completely-realistic resolution of the display.

There’s no killer app

I didn’t see anything during the demo that I absolutely had to have, and would use all the time.

If there was a killer app, then maybe I’d get the headset, and then use the other stuff because I have the headset on all the time, anyway — might as well do everything in VR.

That’s what happened with smartphones. We got the smartphones because we needed a phone anyway. You can’t live without a phone. And once you have the phone, might as well use it as a camera, as a GPS, as an ebook reader, as a music player, as a note-taking app, as a calendar, and as a casual gaming device. Not to mention all the thousands of other apps you can use on a smartphone.

Will that happen with VR? No. Nobody is going to spend their life inside a VR headset.

It will happen with AR. I’m still totally convinced that AR glasses are the future. They will replace our phones, and, since we have the AR glasses on anyway, we’ll use them for work, we’ll use them for music and movies and games and social media and everything else.

But right now, we’re not there. The Apple Vision Pro wants to be there — its pass-through camera makes the device usable for augmented reality. But it’s not an always-on, always-with-you device. And until it is, we’ll still use all the other stuff.

It’s too expensive for just fun and games

Sure, there are a handful of games for the Vision Pro. And you can watch movies on a giant personal screen.

But there are far, far cheaper ways to play VR games, with much bigger selections. And there are already very cheap and lightweight glasses that let you watch movies if you want that kind of thing. Or you can just buy a slightly bigger TV. TVs are getting ridiculously cheap these days.

Also — I already own a big TV set. And I can watch my TV with other people. I can’t watch movies on the Vision Pro with other people.

Now, let’s talk about what I learned about the future from getting this demo.

Seeing an overlay over reality is awesome

Yes, the Meta Quest has a pass-through camera but the video quality is lousy.

The Apple Vision Pro’s video quality is awesome. It’s almost like looking through a pane of glass. Not exactly glass — it fakes it with video — but close enough. I was extremely impressed.

And, Samsung showed off a transparent TV earlier this year.

So the idea of a transparent pair of glasses that can turn into AR or VR glasses on demand — it’s within reach. And these transparent glasses are going to be awesome for augmented reality. And we can replace our phones with these glasses.

If you want to see what that world will be like, go to the nearest Apple store and get your own Vision Pro demo.

The interface of the future will be gesture-based

Remember how, in Minority Report, Tom Cruise moved images around with his hands?

That’s what the Vision Pro interface is like. They have cameras on the headset that can see your hands. In fact, even if my hand was hanging down by my side, it still registered if I made a pinching gesture.

No controllers necessary.

This will be great in the future when we wear smart glasses all the time because we won’t have to carry controllers around. The fewer things we have to carry around, the better.

But the big progress that Apple made with the interface is the eye-tracking. To click on something, you just look at it and pinch your fingers. That means you don’t have to hold your hands up in the air in front of you all the time. That would get tiring. I mean, how long can Tom Cruise stand there, waving his arms around? No matter how fit you are, that’s going to get exhausting.

And, like I said, you don’t need to raise your arm to make the pinching gesture. You can keep your hand down on your lap, or by your side, or on your desk.

(Photo by Terrence Smith.)

You still need to raise your arms to resize windows, or to drag them around, but how often do you need to resize a window, anyway?

Who should buy the headset today

If you’re building an AR or VR platform for the future, you should definitely check out the Vision Pro and see what possibilities are offered by the pass-through camera and the eye-tracking-and-pinching control system.

But, unless your company is paying for the device, return it within the two-week period.

The only reason to keep it is if you are currently developing apps for the Apple Vision Pro. Then, you need the device to test your apps.

If you’re anyone else, buy a larger TV and computer monitor and a PlayStation VR or a Quest to play games on and you’ll still be around $3,000 ahead.

But do go and get a demo. It’s free, and it might give you some ideas for apps or business opportunities for a few years down the line.

Apple and the bane of VR gentrification

Apple Vision Pro. (Image courtesy Apple.)

I recently read a CNN article on Tim Cook and the risk he’s taking with Apple Vision Pro. The gist of it is this: The Vision Pro will be Apple’s riskiest launch in years and could end up being the product that defines Tim Cook’s legacy.

What struck me is the point of view. The main concern is the fate of one person, Tim Cook. And fair enough, his legacy at Apple might indeed be affected by the success or failure of the Vision Pro. But I can’t help thinking about how there’s high visibility attention on a single wealthy-for-life individual — but far less talk about the societal impact of this latest Apple product, which, to my mind, conjures up the phrase, “gentrification of extended reality technologies.”

Have you seen the Apple Vision Pro movie? It’s a statement, not only about the product, but also about the affluence of the “neighborhood” that Apple associates with its target demographic.

The thing about gentrification of physical neighborhoods is that it implicitly demotes the preceding locals and the context of their lives, despite marketing claims to the contrary. I’ve watched this happen first hand in the so-called Arts District in Los Angeles. Artists and artists’ lofts have, with few exceptions, given way to expensive upscale condominiums and trendy food and drink spots.

The first impression the Vision Pro movie makes is that Apple’s target demographic lives in immaculate upscale dwellings, ostensibly in an upscale neighborhood. Of course this complements the marketing of an AR-forward technology that includes seeing the physical environment while the projected interactions are displayed as a visual overlay.

However, a cluttered wall or messy pile of clothes is also going to be a part of the Vision Pro experience, and a major distraction, I think. The minute I see highly staged and perfected environments in the marketing, I suspect that gentrification – in this case, gentrification of our dwellings – is in play.

Apple Vision Pro. (Image courtesy Apple.)

If that was the end of it, we could excuse it as marketing pretention to flatter the product. But then there’s the retail cost of $3,500 for the privilege of Vision Pro ownership and the case for gentrification becomes unavoidable.

When VR and AR, component technologies of XR, were still emerging from niche implementations, it was interesting that Google created viewers out of cardboard, to take advantage of the ubiquitous technology of the cell phone and provide some form of XR experience to virtually anyone, anywhere. The cost of entry was exceedingly low, although we understandably bemoaned the lack of apps and motion sickness.

Subsequently, however, the push for a superior stand-alone headset has seen rising costs while still not achieving widespread adoption. Consumers have balked at the increasing retail price of the Meta Quest headset, which has doubled between the two latest versions. Still, I suppose it’s something to say that it comes in under $1,000, similar to the cost of a well-configured recent model iPhone.

Now with the Vision Pro, however, Apple has really upped the ante and set its sights on a privileged few. At $3,500, it costs five times as much as the Meta Quest 3 and ten times as much as the Meta Quest 2. It’s priced like a very well configured MacBook Pro, but without the corresponding breadth of software ecosystem to power it. 

Hopefully the cost of anything is, first and foremost, a reflection of its relative value.

Well, as shown in the Vision Pro movie, the primary functionality of the Vision Pro is watching visuals, entertainment and video chats. So, your friend can appear to be hovering over your bed as you chat and walk about the room. Amazing? Sure, but who needs this?

Perhaps truly absurd is the person packing a suitcase, while wearing the headset and then taking a video call. It’s already challenging that there’s a headset and tethered battery to wear at all, but to wear it while doing a real-world chore, just in case a call comes through? No one… literally no one with an ounce of practicality is going to do that.

Yet the implication is that if you want to stay connected, you should want to do that, at all times. Ironically it also suggests that your cell phone, which easily slips out of the way into a pocket or waits, also out of the way, patiently on a counter, has become just so… passé, so… inadequate. Gentrification of your phone calls never looked so sci-fi, yet so pointless.

Apple Vision Pro. (Image courtesy Apple.)

Consequently, my concern is that this whole class of technology still won’t become ubiquitous like the cell phone. The potential benefits of XR’s components, VR and AR, could be enormous for everyone. 

But like gentrification of a neighborhood, people will be priced out of the Apple XR privilege in droves. There will be fewer customers, but with necessarily greater economic means. Their needs and desires will take over the paradigm and be the influence for most content.

And consider this: the Apple marketing movie shows movie watching with the Vision Pro. Are you in a family of, let’s say, four? Well, that’s $14,000 in headsets for everyone to take part together.

Yet Apple touts an inclusive paradigm of the Vision Pro by displaying an uncanny valley version of your face on the headset to people who look at you. But rather than inclusivity, the implicit message is, “I live in a world you can’t experience without affluence.”

Apple Vision Pro. (Image courtesy Apple.)

I’m skeptical that Apple has cracked the code for selling the world on XR, but we may nonetheless be witnessing the gentrification of a technology.

Of course, it’s not so much that Apple is trying to gentrify this domain. Solving the challenges of this technology has been expensive, and the devices we’d be happy with would inevitably be expensive, at least at first. I just hope XR doesn’t remain a vanity project for Apple with usefulness based on deep pockets and superficial ideas of what we need to lead meaningful lives.

Update: I’ve been online at Apple’s Vision Pro sales page to see what kinds of options are available for the Vision Pro. To my surprise, the first step you’re compelled to complete is a scan of your head for measurements needed by Apple to include the correct fit of Light Seal and head bands. You’ll need an iPhone or iPad with Face ID to find the right size. If you’re on a desktop computer, you’ll also scan a circular Apple code on the screen that synchronizes their site with your captured head dimensions.

After looking left, right, up and down, twice, your dimensions are submitted to Apple. The next step is to select options for your vision, whether you have a prescription, contacts or readers. You won’t need precise prescription information because the inserts are generalized and accommodate most prescriptions. The optical inserts run between $99 and $149.

After all of the sizing procedure, you’re able to select a storage memory size, from 256GB to 1TB. The 1TB option is $3,899.

My final point is this: If there was any doubt that this is a vanity device, the custom fit and optical inserts tell you that each Vision Pro is tailored primarily for just one person. Since the optical inserts attach magnetically, you could swap them out with another user, but is that really practical? And what about the Light Seal and head bands, also sized to fit?

Of course, the real measurements that count are product sales and paradigm adoption rates.