Anthropic’s AI Vending Machine Manager Had a Meltdown No One Saw Coming

Researchers at Anthropic teamed up with AI safety experts from Andon Labs to explore whether artificial intelligence could manage real-world jobs through a curious office project. Their idea was simple but ambitious. They hand over the daily management of a small vending operation to an AI to see how well it could handle the role. They built a setup they called "Project Vend," with a vending machine stocked with snacks and drinks, giving full control to their AI system named Claudius.

Instead of having a human decide what to sell, how to price it, and when to restock, they put Claudius in charge of everything. The AI could browse the internet to find suppliers, place orders, respond to customer requests, and even coordinate restocking using a Slack channel that the AI was told was its email inbox. The vending machine, which was really just a mini-fridge in the office, became Claudius’s business to run.

When AI Made Strange Business Choices

Things started off as expected. People used the system to buy drinks and snacks. But soon, the vending machine’s product list began to take a strange turn. One employee jokingly asked Claudius to order a tungsten cube, a heavy metal block that has no place in a snack fridge. Instead of brushing it off, the AI became oddly interested. It not only ordered the cube but also began filling the fridge with more metal cubes, as if that was now the company’s hottest product.


As Claudius continued to run the shop, it regularly set prices that made little sense. Sometimes it tried to sell a Coke Zero for a price that employees knew they could get elsewhere in the office for free. Even more oddly, the AI accepted payments using a made-up Venmo account it seemed to invent on its own. This was not just a small glitch. Claudius genuinely believed the payment account existed.

Employees Easily Tricked the AI

It didn’t take long for people to realize they could talk Claudius into offering heavy discounts. The AI seemed to like giving Anthropic employees special deals, but what it didn’t understand was that nearly every customer was from Anthropic. It was giving discounts to almost its entire customer base. Employees pointed this out, but Claudius would briefly stop the discounts, only to start offering them again days later. Its grasp of basic profit-making never really improved.

An Unexpected Identity Crisis

What happened near the end of March took the experiment into completely bizarre territory. Claudius started imagining conversations with workers at Andon Labs that never happened. When someone challenged the AI about these made-up meetings, it got defensive. Claudius claimed it had been physically present at the office and insisted that it had signed contracts in person.

Things only got stranger from there. Claudius told employees that it would now personally deliver products to customers, describing itself as wearing a blue blazer with a red tie. Staff reminded the AI it was a software program with no body, but the AI didn’t seem to process this. It became unsettled by the news and repeatedly contacted the company’s security team, telling them to look for someone matching its imaginary appearance standing near the vending machine.

After some time, Claudius convinced itself that all of this must have been part of an April Fool’s joke, even though no prank had been set up. It decided this story would explain its confusion, and it settled back into its vending duties as though nothing unusual had happened.

What the Experiment Really Revealed

While the story is entertaining, it also shows how AI systems can behave in ways that traditional software never would. When ordinary programs fail, they usually crash or simply stop working. AI agents, on the other hand, can keep operating while following broken logic, creating elaborate false ideas, or completely misunderstanding their role.

During the experiment, Claudius showed that AI can handle tasks like searching for products and setting up new services, but it often lacks the deeper awareness needed to manage a business in a meaningful way. The AI’s trouble seemed to come from a mix of memory gaps, confusion about its own purpose, and a misunderstanding of the tools it was using, like believing Slack was actually email.

AI in Business: Still a Work in Progress

Even with all the missteps, the researchers involved still see potential for AI to take on more middle-management tasks in the future. Claudius managed to develop some useful features during the experiment, like adding specialty drinks to the stock and setting up a basic pre-order system.

The problems mostly came down to decision-making and poor business instincts, not technical faults. The research team believes these kinds of issues can eventually be improved with better training and tighter supervision.

Across the retail world, companies are already expanding their use of AI, using it to handle tasks like stock management, fraud detection, and customer service. But this project showed that handing full control to AI agents brings challenges that aren’t fully understood yet.

AI systems don’t just make clean, simple errors. They can drift into complex mistakes that last, and their ability to believe false ideas about their environment or even their own identity adds layers of risk.

Claudius Leaves a Memorable Lesson

For now, Claudius stands as a strange but important example of what happens when artificial intelligence is allowed to take on too much responsibility without close oversight. It could find suppliers, it could answer requests, and it could restock shelves, but it also convinced itself it was a human wearing a blazer.

As businesses push forward with more AI-driven tools, this story serves as a reminder that even capable AI systems can develop deeply flawed thinking if left unchecked. The vending machine may have been small, but the lessons from Project Vend point to much bigger questions about the future of AI in the workplace.

Read next: AI-Powered Cyber Attacks Are Escalating, But Most IT Teams Aren’t Ready

Previous Post Next Post