People think AIs are conscious. What could this mean for bots in OpenSim?

(Image by Maria Korolov via Adobe Firefly.)

I’ve been interacting with OpenSim bots — or NPCs — for nearly as long as I’ve been covering OpenSim. Which is about 15 years. (Oh my God, has it really been that long?)

I’ve been hoping that writing about OpenSim would become my day job, but, unfortunately, OpenSim never really took off. Instead, I covered cybersecurity and, more recently, generative AI.

But then I saw some reporting on new studies about AI, and immediately thought — this could really be something in OpenSim.

The study was published this past April in the journal Neuroscience of Consciousness, and it showed that a majority of people – 67% to be exact – attribute some degree of consciousness to ChatGPT. And the more people use these AI systems, the more likely they are to see them as conscious entities.

Then, in May, another study showed that 54% of people, after a conversation with ChatGPT, thought it was a real person.

Now, I’m not saying that OpenSim grid owners should run out and install a bunch of bots on their grids that pretend to be real people, in order to lure in more users. That would be dumb, expensive, a waste of resources, possibly illegal and definitely unethical.

But if users knew that these bots were powered by AI and understood that they’re not real people, they might still enjoy interacting with them and develop attachments to them — just like we get attached to brands, or cartoon animals, or characters in a novel. Or, yes, virtual girlfriends or boyfriends.

In the video below, you can see OpenAI’s recent GPT-4o presentation. Yup, the one where ChatGPT sounds suspiciously like Scarlett Johansson in “Her.” I’ve set it to start at the point in the video where they’re talking to her.

I can see why ScarJo got upset — and why that particular voice is no longer available as an option.

Now, as I write this, the voice chatbot they’re demonstrating isn’t widely available yet. But the text version is — and it’s the text interface that’s most common in OpenSim anyway.

GPT-4o does cost money: you pay both to send it a question and to get a response back. A million tokens’ worth of questions — or about 750,000 words — costs $5, and a million tokens’ worth of responses costs $15.

A page of text is roughly 250 words, so a million tokens is about 3,000 pages. That means that for $20, covering a million tokens of questions plus a million tokens of answers, you can get a lot of back-and-forth. But there are also cheaper platforms.

Anthropic’s Claude, for example, which has tested better than ChatGPT in some benchmarks, costs a bit less — $3 for a million input tokens, and $15 for a million output tokens.

But there are also free, open-source platforms that you can run on your own servers, with comparable performance levels. For example, on the LMSYS Chatbot Arena Leaderboard, OpenAI’s GPT-4o is in first place with a score of 1287, Claude 3.5 Sonnet is close behind with 1272, and the (mostly) open source Llama 3 from Meta is not too far behind, with a score of 1207 — and there are several other open source AI platforms at the top of the charts, including Google’s Gemma, NVIDIA’s Nemotron, Cohere’s Command R+, Alibaba’s Qwen2, and Mistral.
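To get a feel for what that pricing means in practice, here is a minimal back-of-the-envelope calculator. The per-token prices are the GPT-4o numbers quoted above, and you could plug in Claude’s (or any other provider’s) rates the same way; the traffic figures in the example are made-up assumptions, just for illustration.

```python
# Back-of-the-envelope monthly cost for one chatty NPC, using the GPT-4o
# prices quoted above ($5 per million input tokens, $15 per million output
# tokens). Swap in another provider's rates as needed. The traffic numbers
# in the example at the bottom are illustrative assumptions only.

TOKENS_PER_WORD = 1_000_000 / 750_000  # ~1.33, from "a million tokens = 750,000 words"

def monthly_cost(chats_per_day, words_in, words_out, days=30,
                 price_in_per_m=5.00, price_out_per_m=15.00):
    """Estimate dollars per month for one bot, given average chat traffic."""
    tokens_in = chats_per_day * words_in * TOKENS_PER_WORD * days
    tokens_out = chats_per_day * words_out * TOKENS_PER_WORD * days
    return (tokens_in / 1e6) * price_in_per_m + (tokens_out / 1e6) * price_out_per_m

# Example: 200 visitor messages a day, ~50 words in and ~100 words out each.
# That works out to roughly $2 of input and $12 of output, about $14 a month.
print(f"${monthly_cost(200, 50, 100):.2f}")
```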

I can easily see an OpenSim hosting provider adding an AI service to their package deals.
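To make that concrete, here is a minimal, purely hypothetical sketch of what such a service might look like: a tiny web endpoint that an in-world script could call whenever a visitor talks to an NPC. The route name, persona text, and overall wiring are my own illustrative assumptions, not an existing OpenSim or hosting feature; the only real API used is OpenAI’s Python client.

```python
# Hypothetical grid-side "NPC brain" service. An in-world script (for example
# via llHTTPRequest) would POST the visitor's chat line here and have the NPC
# speak the returned reply. Route name and persona are illustrative only.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are Ada, a greeter NPC on an OpenSim grid. Be friendly and brief, "
    "and if asked, say plainly that you are an AI character, not a person."
)

@app.route("/npc-chat", methods=["POST"])
def npc_chat():
    message = request.get_json()["message"]
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": message},
        ],
        max_tokens=150,  # keep replies short to keep costs down
    )
    return jsonify({"reply": reply.choices[0].message.content})
```

Note that the persona above is written to disclose that the character is an AI, which matters given the studies mentioned earlier.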

(Image by Maria Korolov via Adobe Firefly.)

Imagine the potential for creating truly immersive experiences in OpenSim and other virtual environments. If users are predisposed to see AI entities as conscious, we could create non-player characters that feel incredibly real and responsive.

This could revolutionize storytelling, education, and social interactions in virtual spaces.

We could have bots that users can form meaningful relationships with, AI-driven characters that can adapt to individual user preferences, and virtual environments that feel alive and dynamic.

And then there’s the potential for interactive storytelling and games, with quests and narratives that are more engaging than ever before. We could create virtual assistants that feel like true companions, or even build communities that blur the lines between AI and human participants.

For those using OpenSim for work, there are also applications here for business and education, in the form of AI tutors, AI executive assistants, AI sales agents, and more.

However, as much as I’m thrilled by these possibilities, I can’t help but feel a twinge of concern.

As the study authors point out, there are some risks to AIs that feel real.

(Image by Maria Korolov via Adobe Firefly.)

First, there’s the risk of emotional attachment. If users start to view AI entities as conscious beings, they might form deep, potentially unhealthy bonds with these virtual characters. This could lead to a range of issues, from social isolation in the real world to emotional distress if these AI entities are altered or removed.

We’re already seeing that, with people feeling real distress when their virtual girlfriends are turned off.

Then there’s the question of blurred reality. As the line between AI and human interactions becomes less clear, users might struggle to distinguish between the two.

Personally, I’m not too concerned about this one. We’ve had people complaining that other people couldn’t tell fantasy from reality since the days of Don Quixote. Probably even earlier. There were probably cave people sitting around, saying, “Look at the young people with all their cave paintings. They could be out actually hunting, and instead they sit around the cave looking at the paintings.”

Or even earlier, when language was invented. “Look at those young people, sitting around talking about hunting, instead of going out there into the jungle and catching something.”

It was the same when movies were first invented, when people started getting “addicted” to television, and then to video games… we’ve always had moral panics about new media.

The thing is, those moral panics were also, to some extent, justified. Maybe the pulp novels that the printing press gave us didn’t rot our brains. But Mao’s Little Red Book, the Communist Manifesto, and that thing that Hitler wrote that I don’t even want to name all aided and abetted the men who wrote them.

So that’s what I’m most worried about — the potential for exploitation. Bad actors could misuse our tendency to anthropomorphize AI, creating deceptive or manipulative experiences that take advantage of users’ emotional connections and lead them to be more tolerant of evil.

But I don’t think that’s something that we, in OpenSim, have to worry about. Our platform doesn’t have the kind of reach it would take to create a new dictator!

I think the worst that would happen is that people might get so engaged that they spend a few dollars more than they planned to spend.

AI war breaks out between tech giants

(Image by Maria Korolov via Midjourney.)

After ChatGPT was released on Nov. 30, 2022, the world changed. Whatever you might personally think about AI, the events of last year showed that AI was capable of human-level creativity in art, music, writing, and coding. And, for the first time, AI demonstrated common sense. Or, at least, something close enough to common sense for all practical purposes.

Companies like Google that had been sitting on their AI projects for years, unwilling to do any damage to their existing business models, are having to rethink their plans.

Google announced a Code Red and brought cofounders Sergey Brin and Larry Page back from retirement. It has also invested $400 million in OpenAI rival Anthropic, whose own version of ChatGPT, called Claude, is still in closed beta, though early users say it’s better.

Apple is holding an AI summit for employees next week, the company’s first live and in-person event in years.

Microsoft takes the lead

“The AI race starts today,” said Microsoft CEO Satya Nadella at a press conference today.

The company announced that it’s integrating AI chat into the Bing search engine and its Edge browser — after it invested a reported $10 billion into ChatGPT maker OpenAI last month. The company has also previously announced plans to integrate AI throughout its entire product portfolio.

Adding AI chat to Bing, however, is a direct shot across Google’s bow.

“We can improve the way billions of people use the Internet,” said Yusuf Mehdi, Microsoft’s consumer chief marketing officer, in today’s presentation.

As of the end of 2022, Bing only had a 9 percent share of the search engine market. Google had 85 percent, and the rest was split between Yahoo, Baidu, Yandex, DuckDuckGo and other competitors, all of whom were in the low single digits.

So Bing has a lot of opportunity for improvement.

And speaking of Baidu, a Chinese search engine, it also plans to launch its own AI chatbot, called Ernie Bot. According to CNN, it’s expected to go live in March and is currently being tested internally.

This OpenAI GPT-3 Powered Demo Is A Glimpse Of NPCs In The Future

The developer of Modbox linked together Windows speech recognition, OpenAI’s GPT-3 AI, and Replica’s natural speech synthesis for a unique demo: arguably one of the first artificially intelligent virtual characters.

Modbox is a multiplayer game creation sandbox with SteamVR support. It officially launched late last year after years of public beta development, though it’s still marked as Early Access. We first tried it back in 2016 for HTC Vive. In some ways Modbox was, and is, ahead of its time.

The developer’s recent test using two state-of-the-art machine learning services – OpenAI’s GPT-3 language model and Replica’s natural speech synthesis – is nothing short of mind-blowing. Start at roughly 4 minutes 25 seconds to see the conversations with two virtual characters.
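The shape of the pipeline is simple even if the results are striking: speech in, language model in the middle, speech out. The sketch below is not the Modbox developer’s code; the two speech functions are hypothetical placeholders standing in for Windows speech recognition and Replica’s synthesis, and the dialogue step uses OpenAI’s current chat API as a stand-in for the GPT-3 endpoint used in the demo.

```python
# Rough sketch of the speech -> language model -> speech loop described above.
# listen() and speak() are hypothetical placeholders; only the chat call is a
# real API (OpenAI's Python client), standing in for the GPT-3 of the demo.
from openai import OpenAI

client = OpenAI()
CHARACTER = "You are a grumpy innkeeper in a fantasy tavern. Stay in character."

def listen() -> str:
    """Placeholder: return the player's spoken words as text."""
    raise NotImplementedError("wire up a speech-to-text engine here")

def speak(text: str) -> None:
    """Placeholder: synthesize and play the character's reply."""
    raise NotImplementedError("wire up a text-to-speech engine here")

while True:
    heard = listen()
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": CHARACTER},
            {"role": "user", "content": heard},
        ],
    )
    speak(reply.choices[0].message.content)
```

Each pass through that loop makes multiple cloud round-trips, which is where the delay discussed below comes from.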

Microsoft, which invested $1 billion in OpenAI, has exclusive rights to the source code & commercial use of GPT-3, so this feature is unlikely to be added to Modbox itself. But this video demo is the best glimpse yet at the future of interactive characters. Future language models could change the very nature of game design and enable entirely new genres.

There is an uncomfortably long delay between asking a question and getting a response because GPT-3 and Replica are both cloud-based services. Future models running on-device may eliminate the delay. Google & Amazon already include custom chips in some smart home devices to cut the response delay for digital assistants.

How Is This Possible?

Books, movies and television are character-centric. But in current video games & VR experiences you either can’t speak to characters at all, or can only navigate pre-written dialog trees.

Directly speaking to virtual characters – and getting convincing results no matter what you ask – was long thought impossible. But a breakthrough in machine learning has finally made the idea feasible.

In 2017, Google’s AI division revealed a new approach to language models called Transformers. State-of-the-art machine learning models had already been using the concept of attention to get better results, but the Transformer model is built entirely around it. Google titled the paper ‘Attention Is All You Need’.
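For readers who want the one-line version of that idea: the core operation the paper defines is scaled dot-product attention, where each token’s query Q is compared against every token’s keys K to decide how much of each token’s values V to blend into the output (d_k is the key dimension):

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```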

In 2018, the Elon Musk-backed startup OpenAI applied the Transformer approach to a new general language model called Generative Pre-Training (GPT), and found it was able to predict the next word in many sentences and could answer some multiple-choice questions.

In 2019, OpenAI scaled up this model by more than 10x in GPT-2, and found that the scale-up dramatically increased the system’s capabilities. Given a few sentences of prompt, it was now able to write entire essays on almost any topic, or even crudely translate between languages. In some cases, its output was indistinguishable from human writing. Due to the potential consequences, OpenAI initially decided not to release it, leading to widespread media coverage & speculation about the societal impacts of advanced language models.

GPT-2 had 1.5 billion parameters, but in June 2020 OpenAI again scaled up the idea to 175 billion in GPT-3 (used in this demo). GPT-3’s results are almost always indistinguishable from human writing.

Technically, GPT-3 has no real “understanding” – though the philosophy behind that word is debated. It can sometimes produce nonsensical or bigoted results – even telling someone to kill themselves. Researchers will have to find solutions to these limitations, such as a “common sense” mechanism, before such models can be deployed in general consumer products.

Elon Musk’s OpenAI: Robots Learn From Humans in VR

The non-profit research company OpenAI has unveiled a system for training robots in virtual reality. The ingredients: an HTC Vive, OpenAI’s artificial intelligence, and a robot arm.

OpenAI: Robot system learns from humans in virtual reality

The basic idea behind the system is that humans learn by imitation. So why shouldn’t robots imitate humans? No sooner said than done, and you can see the result in the video. A human enters the virtual environment with an HTC Vive and stacks blocks there. The artificial intelligence consists of two neural networks: the first network uses its camera to observe what is happening in the virtual world. The reasoning is that actions in VR can be evaluated much more easily and precisely than in the real world.

The insights gained are then passed to the second network, which handles the imitation and then carries out the action on its own, not only in virtual reality but also in the real world. So far, not yet truly intelligent. The intelligence comes into play when the system tries to predict human actions or to stack the colored blocks correctly, no matter where they start out.

That is also the goal of the experiment: a single demonstration is enough to teach the robot a new trick. And thanks to the AI, the trick can be carried out even when the starting conditions are different.

Elon Musk is considered the driving force behind the OpenAI research project, which is supported not only by individuals but also by companies such as Microsoft and Amazon. Musk is known as a critic of artificial intelligence and sees it as a danger to humanity. The goal of the non-profit project is to pool resources, to keep the development of artificial intelligence as open as possible, and not to leave it to the big corporations. Musk also hopes the project will help identify the dangers and opportunities of AI early on.

Source: OpenAI


OpenAI is Using VR to Train AI Robots

Much of the attention in the virtual reality (VR) industry is focused on entertainment: creating realistic worlds that humans can become immersed in. Researchers at OpenAI see another use for the work that has gone into making virtual worlds as real as possible – training artificial intelligence.

Training AI robots is not a simple task. The algorithms that control movement in a robot need to be trained on thousands of real-world examples. OpenAI’s system works using two distinct algorithms: one which interprets where everything is around it, and another which tries to decipher why and how an action occurs. A task such as picking up and stacking a tower of wooden blocks, for example, could take hundreds of repetitions, with the algorithms observing and interpreting how the action is performed, before the system would be able to replicate the motions.

By using a VR simulation of the activity, however, the AI was able to decode the actions needed after just a single recording, and then replicate them using a physical machine. The system was able to copy the movements made by a human without ever having moved the machine before. The developers expect that the VR training system will work well for training with other rigid objects, since VR is able to simulate solid objects accurately, but is less successful at modelling fluid or flexible objects, though that may change as the technology develops.

“Nothing in our technique is specific to blocks,” says Josh Tobin, a researcher at OpenAI, in a video. “This system is an early prototype, that will form the backbone of the general-purpose robotics systems we’re developing here at OpenAI.”
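As a very rough illustration (and not OpenAI’s actual code), the two-algorithm split described above looks something like the sketch below: one network estimates where the blocks are from the camera feed, and the other chooses the next arm motion from that estimate plus the single recorded demonstration.

```python
# Illustrative sketch of the two-algorithm split described above; this is
# NOT OpenAI's published implementation, just the structure the article
# describes. One network reads the camera, the other imitates the demo.
import numpy as np

class VisionNetwork:
    def locate_blocks(self, camera_image: np.ndarray) -> np.ndarray:
        """Estimate block positions from a raw camera frame."""
        ...  # a trained vision model in the real system

class ImitationNetwork:
    def next_action(self, block_positions: np.ndarray, demo: list) -> np.ndarray:
        """Pick the next arm motion, conditioned on one recorded VR demo."""
        ...  # a trained policy network in the real system

def run_task(vision: VisionNetwork, policy: ImitationNetwork, demo: list, camera, arm):
    """Replay a task learned from a single demonstration on the physical arm."""
    while not arm.task_done():
        state = vision.locate_blocks(camera.read())
        arm.execute(policy.next_action(state, demo))
```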

You can view a video demonstration of the OpenAI system below.

VRFocus will continue to report on OpenAI and other innovative uses for VR technology.