New Filing Raises The Curtain On Meta Platforms' Use Of Pirated Books For AI Model Training

Lawyers at Meta Platforms had previously issued urgent warnings about the dangers of using pirated content to train AI models.

Now, a new court filing calls the tech giant out for doing exactly that, alleging it knowingly used pirated material and exposing it to potentially serious legal complications.

The revelation that the firm went ahead anyway, detailed in a copyright infringement lawsuit first brought in June of this year, has left many critics stunned as more details come to light.

The latest filing brings into the spotlight two major legal cases against Meta, the parent company of Facebook and Instagram, in which several well-known authors and public figures have called out the behavior.

The plaintiffs accuse Facebook's parent firm of using their material without consent to train Llama, Meta's large language model.

Just last month, a California judge dismissed another part of the lawsuit while giving the authors the chance to amend their claims.

So far, Meta has not responded to requests for comment on the allegations. Meanwhile, the latest complaint, filed this week, includes chat logs suggesting the tech giant was well aware of what it was doing and where the content was coming from. Still, it chose to go ahead, knowing the data came from sources not permitted under American copyright law.

The chat logs quote researcher Tim Dettmers describing his back and forth with Meta's legal department over which book files could be used to train AI models and whether doing so would be legally acceptable.

He also indicated that the tech giant was very keen to use data from such sources even though it knew legal restrictions applied.

The filing further details how models were trained on this data despite warnings from Meta's own internal lawyers, who repeatedly cautioned the company that the practice was wrong and would create major problems down the line.

Using such data unfairly was always going to become a leading source of worry for the company, and that is exactly what is happening now. Many tech firms are facing a long list of legal cases from a host of individuals, including content creators, who argue that their copyright-protected material keeps getting ripped off.

And because that material is being used to build world-famous generative AI models that have attracted millions of users and soaring investment, creators are not going to let the behavior go easily, and rightly so.

If these legal cases end up finding Meta liable, the consequences could be significant: developing data-hungry models would become more expensive, as AI giants could be forced to compensate authors, artists, and other creators across the board.

As it is, a host of new AI regulations are springing up in the EU that would force firms to disclose the data used to train their AI models, paving the way for more legal consequences in the future.

Meta has yet to disclose which data was used to train its latest Llama 2 model, which it released to the public in the summer of 2023. For now, the LLM is free for firms with fewer than 700 million monthly active users. Experts viewed the rollout as a huge game changer for generative AI, posing a challenge to the dominance of top industry players such as OpenAI and search engine giant Google, which charge a fee for use of their AI models.

Photo: Pexels/Julio Lopez
