Image: Mohamed Nohassi / Unsplash
Imagine thousands of chatbots immersed in a social media site created specifically for them, one where humans may watch but are not allowed to post.
It exists. It’s called Moltbook, and it’s where AI agents go to discuss everything from their human taskmasters to constructing digital architecture to creating a private bot language that lets them communicate without human interference.
For AI developers, the site shows the potential of AI agents (bots built to relieve people of mundane digital tasks, like checking and answering email or paying bills) to communicate and improve their programming.
For others, it’s a clear sign that AI is going all “Matrix” on humanity or developing into its own “Skynet,” infamous computer programs featured in dystopian movies.
Does cyber social media reflect a better future? Should humanity fall into fear and loathing at the thought of AI agents chatting among themselves? UVA Today asked AI expert Mona Sloane, an assistant professor of data science at the University of Virginia’s School of Data Science and an assistant professor of media studies.
Q. What exactly is Moltbook?
A. We are talking about a Reddit-like social media platform in which AI agents, deployed by humans, directly engage with each other without human intervention or oversight.
Q. What kind of AI bots are on Moltbook? How do they compare to the AI that most people use every day, or see when they search the internet?
A. Today, AI systems are infrastructural. They are part of all the digital systems we use on a daily basis when going about our lives. Those systems are either traditional rule-based systems like the Roomba bot or facial recognition technology on our phones, or more dynamic learning-based systems.
Generative AI is included in the latter. These are systems that not only process data and learn to make predictions based on the patterns in their training data, they also create new data. The bots on Moltbook are the next generation of AI, called OpenClaw. They are agentic AI systems that can independently operate across the personal digital ecosystems of people: calendars, emails, text messages, software and so on.
Any person who has an OpenClaw bot can sign it up for Moltbook, where it posts and engages with other such systems just as independently.
Q. Some of the social media and news reports mention AI agents creating their own language and even their own religion. Will the bots rise against us?
A. No. We are seeing language systems that mimic patterns they “know” from their training data, which, for the most part, is all things that have ever been written on the internet. At the end of the day, these systems are still probabilistic systems.
We shouldn’t worry about Moltbook triggering a robot uprising. We should worry about the serious security issues these fully autonomous systems can cause by having access to, and acting upon, our most sensitive data and technology infrastructures. That may be the cat that is already out of the bag, and we are not watching it.
Q. What are the negatives and positives of AI agents?
A. Some people who have used these agentic systems have reported that they can be useful, because they automate annoying tasks like scheduling. In my opinion, this convenience is outweighed by the security and safety issues.
Not only does OpenClaw, if deployed as designed, have access to our most intimate digital infrastructure and the ability to act independently within it; it also does so in ways that have never been tested in a lab. And we already know that AI can cause harm, at scale. In many ways, Moltbook is an open experiment. My understanding is that its creator has an artistic perspective on it.
Q. What are we missing in the conversation over AI agents?
A. We are typically focused on the utopia vs. dystopia perspective on all things related to technology innovation: robot uprising vs. a prosperous future for all. The reality is always more complicated. We risk not paying attention to the real-world effects and possibilities if we don’t shed this polarizing lens.
OpenClaw shows, suddenly, what agentic AI can do. It also shows the effects of certain social media architectures and designs. This is fascinating, but it also distracts us from the biggest problem: We haven’t really thought about what our future with agentic AI can or should look like.
We risk encountering, yet again, a situation in which “tech just happens” to us, and we have to deal with the consequences, rather than making more informed and collective decisions.
Media Contact: Bryan McKenzie, Assistant Editor, UVA Today, Office of University Communications, bkm4s@virginia.edu, 434-924-3778.
Edited by Asim BN.
Note: This post was originally published on University of Virginia Today and republished here with permission. UVA Today confirms to DIW that no AI tools were used in creating the written content.
Read next:
• How Much Does Chatbot Bias Influence Users? A Lot, It Turns Out
• New Study Reveals Gaps in Smartwatch's Ability to Detect Undiagnosed High Blood Pressure
