Privacy Group Noyb Challenges Meta’s AI Training Plans Citing GDPR Violations

Meta is facing another storm in Europe, this time over data. The company plans to use millions of user posts and comments to train its AI systems, but Austrian privacy group noyb (None Of Your Business) is pushing back hard.

Max Schrems, the founder of noyb and a veteran of legal battles with Big Tech, has written to Meta with a clear message: unless the company changes its plan and lets users clearly say yes or no first, noyb will take it to court. And not just any court fight. The group is also raising the prospect of a class action, which could cost Meta enormous sums.

This is not the first clash. In June of last year, the same group filed 11 complaints, and Meta paused its AI training in the EU as a result. Now that the company says it wants to start using the data again, noyb is back on its feet.

Meta says it has a legal basis under the GDPR known as "legitimate interest," which in its view means it does not need users' permission to process the data. Noyb counters that this reasoning does not hold: a legitimate interest, the group argues, is not enough to override users' rights.

The group points out that Meta tried the same argument before with advertising. It relied on the "legitimate interest" story back then too, but had to abandon it in 2023; after legal challenges, Meta agreed to ask users before using their data for ads. So why, noyb asks, should AI training be any different?

Meta's explanation runs like this: if only ten percent of users agreed, that would not be enough data, and its AI would not learn local culture well without broad coverage. Noyb answers that other AI companies, such as OpenAI and France's Mistral, do well without access to people's private social content. They do not use data from social networks at all, and their models are still highly capable.

Meta sees things differently. A spokesperson claims that activist voices are merely slowing down innovation. The company argues it is being more transparent than many other AI firms, and says other companies also rely on the same GDPR legal bases to train their systems.

But the legal danger looks real. Noyb says it may ask courts for an injunction to stop Meta right away, and it is also weighing damages claims on behalf of affected users. With roughly 400 million Meta users in the EU, even a modest award of about €500 per person would add up to more than €200 billion (about $224 billion). Other privacy groups across Europe may follow suit, which would only grow the pressure.

Schrems finds it strange that a company this large would run such a risk just to avoid one simple step: asking users for permission.

This conflict is not unique to Meta. Other tech giants like Google, Apple, and Twitter have faced similar scrutiny, prompting updates to their data use policies to meet evolving privacy regulations.

Notably, competitors in AI development such as OpenAI and France’s Mistral choose to exclude private social media content, relying on publicly available or synthetic data instead. This difference shapes both regulatory attitudes and competitive dynamics.

As privacy demands grow stricter, platforms must enhance transparency and obtain clear user consent to avoid costly legal battles and protect their reputations.

Emerging techniques like federated learning and anonymized data processing offer paths forward that respect privacy while supporting AI innovation.
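To make the federated learning idea concrete, here is a minimal, hypothetical Python sketch of federated averaging. The function names and the toy "local step" are illustrative assumptions, not any company's actual pipeline; the point is that only aggregated model updates leave each user's device, never the raw posts or comments.

```python
import numpy as np

def local_update(weights, user_data, lr=0.1):
    """Hypothetical on-device step: nudge the weights toward this
    user's data mean. Stands in for a real local gradient update."""
    gradient = weights - user_data.mean(axis=0)
    return weights - lr * gradient

def federated_round(global_weights, all_user_data):
    """One round of federated averaging: every client trains locally,
    then the server averages the resulting weights. Raw data stays put."""
    client_weights = [local_update(global_weights.copy(), data)
                      for data in all_user_data]
    return np.mean(client_weights, axis=0)

# Toy demo: three "users," each holding private 4-dimensional data.
rng = np.random.default_rng(0)
users = [rng.normal(loc=i, size=(20, 4)) for i in range(3)]

weights = np.zeros(4)
for _ in range(10):
    weights = federated_round(weights, users)
print(weights)  # converges toward the average of the users' data means
```

In a design like this, the central server only ever sees averaged weights, not any individual's content, which is one reason such approaches are often seen as more compatible with consent and data-minimization requirements.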

Advocacy groups like noyb play a key role in driving these industry changes, signaling a shift toward stronger data ethics and governance.

This episode reflects a broader turning point: tech companies must align AI data practices with legal standards and user rights to sustain innovation and maintain market leadership.


Image: DIW-Aigen

Read next: 

• Digital Ads Face Overhaul After Belgian Ruling Invalidates Transparency and Consent Framework

• Banned Without Warning: Pinterest Apologizes Late, Users Still Distrust Platform