WeTransfer has revised a section of its terms of service following criticism from users who believed the company might use uploaded files to train artificial intelligence systems. The updated language, originally slated to take effect on August 8, sparked confusion over whether customer data could be processed by machine learning tools.
The controversy began after users noted that the terms mentioned using content to improve machine learning models involved in content moderation. Some interpreted this as a signal that WeTransfer intended to use their files in AI development or share them with third parties, particularly AI firms. Reactions were especially strong among professionals in creative fields, including those who rely on the platform to transfer artwork, media projects, and other proprietary material.
In response, WeTransfer confirmed that no customer files are processed by machine learning systems or used in any AI-related workflows. The company also stated that no data is shared or sold to outside parties. It clarified that the clause had originally been introduced to cover the future possibility of employing AI to support automated moderation, but no such system had been implemented. The company later acknowledged that the language had caused unnecessary concern and decided to remove references to machine learning altogether.
The situation follows a similar incident involving Dropbox in late 2023, when that company also had to assure users it was not applying AI to their stored data. These repeated misunderstandings suggest persistent concern around how digital platforms handle personal files in the context of emerging AI practices.
Legal experts have warned that even subtle changes in service agreements can expose users to risks, especially when companies operate in data-intensive industries. Privacy advocates often point to the potential for platforms to repurpose stored content under broad or ambiguous clauses, particularly as interest in machine learning continues to grow.
The episode underscores the need for online services to use precise, accessible language in their policies. Users deeply embedded in such platforms may be left without practical alternatives when data terms change suddenly, even if they disagree with the new conditions.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
Read next: Gen Z Quietly Redefines Phone Etiquette, Leaving Generational Gaps at the Dial Tone