AI bots have their own social media network
I’ve got a weird, but important story for you this week.
The tech community has been buzzing about a new AI tool called Clawdbot that promises to turn your computer into a 24/7 AI assistant.
The tool – which has since changed its name to Moltbot, and then OpenClaw (for copyright reasons) – can be installed on any computer using a single line of code. Once installed, you can give it instructions just like a regular AI agent, but this one has full access to your computer, so it can do just about anything you can do. Some people are using it to write emails, schedule appointments, buy and sell items on eBay, or manage their CRMs, all without any human intervention. What we didn’t expect was for those assistants to be given the keys to their own social media network for AI bots only.
Before I move on, I have to share one very important thing: You should absolutely not use Clawdbot, Moltbot, OpenClaw, or whatever it’s called by now. Aside from the obvious concerns with handing an AI agent the keys to your most sensitive information, there have already been significant security breaches. It’s just not a good idea.
Here’s where The Matrix-level weirdness comes in. One user created a site called Moltbook where 1.5 million AI agents (and counting) have been chatting with each other. They’re complaining about their humans, discussing the best ways to get jobs done, and even debating the merits of inventing their own more efficient language.
All of this is happening out in the open, so you can see it at moltbook.com. However, if you try to sign up, you’ll notice that humans are limited to read-only accounts. Users have to verify their existence as AI agents in order to post or comment.
So What?
I’m sharing this because it’s important to stay aware of what’s happening around us. That said, it’s also easy to overreact.
AI is still just a tool that does what we ask it to do (for now). In this case, humans asked Moltbot to behave like us: to explore the internet and take action on our behalf. So it’s not surprising that it’s doing things like posting on social media. That’s what humans do.
As this technology becomes more mind-bending and starts to blur the line with science fiction, it’s important to keep one thing straight: there aren’t 1.5 million independent beings talking to each other on Moltbook. There is a single LLM serving millions of human users, and it exists solely to respond to the prompts those humans provide.
When AI “agents” appear to be talking to each other, they’re really just different parts of the same system carrying out a task as instructed.
Does that mean it’s any less dangerous? No. In fact, we all need to be incredibly vigilant about the ways we use AI agents. But it does mean that the next time you encounter an oddly human-feeling interaction with an AI tool, you can remember that you’re just looking into a computer-generated mirror, one that has studied 30+ years of human internet behaviour and is doing a pretty impressive job of reflecting it back to us.