Activists Question Google’s Commitment to AI Oversight Following Gemini 2.5 Pro Release

Outside Google DeepMind’s London office, a group of protesters staged something unusual: a public courtroom scene, complete with a gavel, a jury, and a symbolic trial. This wasn’t just theater. The group behind it, PauseAI, came with a point to make: they believe Google has walked back its public commitments on AI safety and transparency.


Image: PauseAI / X

Back in 2024, at the AI Safety Summit in Seoul, Google agreed to involve outside experts in evaluating its AI models and said it would share details of that process. But when it launched Gemini 2.5 Pro in April, those steps weren’t visible. The company called the model experimental and skipped third-party disclosures. A few weeks later it did publish a safety summary, but the document was light on detail and didn’t name the reviewers it mentioned.

For the activists, that response didn’t go far enough. They say the issue isn’t just one model or one company; it’s about setting a precedent. If Google can quietly move past a public promise, other AI labs might feel they can too.

More than five dozen people joined the protest, some from tech, others from different fields. The group marched through King’s Cross, eventually stopping in front of DeepMind’s offices. Chants broke out; a few passersby watched, and some joined in. The message was clear: testing matters more than marketing, and public promises should count for something.

The protest also tapped into broader concerns about AI. People talked about misinformation, job losses, and a lack of oversight. But this wasn’t a vague warning about the future; it focused on one clear issue: transparency.

PauseAI says it is now speaking with UK lawmakers, trying to raise its concerns through political channels. So far there has been no formal response from Google, and the company offered no public comment when asked.

Among the demonstrators were people who actively use AI tools. One of the organizers runs a software company. He works with products from Google and OpenAI, and he knows how powerful they’ve become. That’s exactly why he’s worried. If these tools are shaping the future, he says, companies shouldn’t be left to police themselves.

This wasn’t PauseAI’s largest protest, and it may not lead to immediate change. But it captured something growing in the background: an unease with how quickly AI is advancing, and how slowly the guardrails seem to be forming.

This article was edited/created using GenAI tools.
