Dutch file-sharing platform WeTransfer has sparked uproar after quietly adding language to its terms of service suggesting it could use customer files to train AI models, then swiftly removing the clause following backlash.
What Users Spotted and Why It Sparked Alarm
The controversy erupted in mid-July when eagle-eyed WeTransfer users, including high-profile creatives, flagged an update to the company’s terms of service set to take effect on 8 August 2025. In particular, Section 6.3 introduced wording that granted WeTransfer a “perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable licence” to use uploaded files for operating and developing the service, including, crucially, to “improve performance of machine learning models that enhance our content moderation process.”
To many, that appeared to signal a quiet expansion of rights that could allow WeTransfer to use (or even monetise) user-uploaded content for artificial intelligence (AI) training.
Among the concerned voices was UK children’s author and illustrator Sarah McIntyre, who took to X (formerly Twitter) to say: “I pay you to shift my big artwork files. I DON’T pay you to have the right to use them to train AI or print, sell and distribute my artwork and set yourself up as a commercial rival to me.”
It seems that such concerns weren’t unfounded. The clause appeared to echo patterns seen elsewhere in the tech world, where companies including Zoom, Adobe, Slack and Dropbox have faced recent backlash over vague or overly broad licensing updates connected to AI development. As AI tools become more powerful and accessible, the question of whose data fuels them, and with what consent, has become a flashpoint in digital rights and trust.
Why This Matters for Business Users
For many creatives and businesses, WeTransfer has long positioned itself as a privacy-respecting, user-friendly alternative to more data-hungry services. Its clean interface, strong brand identity, and explicit support for the creative industries made it especially popular with freelancers, studios, and design teams.
However, this latest incident has put that trust under scrutiny. Had the AI clause remained, businesses would have faced the uncomfortable possibility that internal documents, pitch decks, drafts, artwork, or sensitive visual assets might be used not just to train algorithms, but potentially to inform systems well beyond the original upload. Even if restricted to content moderation purposes, the lack of clarity raised red flags.
For example, a design agency transferring client work via WeTransfer might wonder whether its bespoke assets could end up being parsed for machine learning, however indirectly. A photographer might fear her original image files could be used to train image recognition or generation tools. And a marketing firm sharing early brand materials might question what “derivative works” could technically include.
Although WeTransfer insists that no such usage has occurred, the lack of clear technical limitations in the original clause left too much room for doubt.
WeTransfer’s Response
Within days of the backlash, WeTransfer issued a formal press release clarifying its position. It insisted that the controversial clause was a misstep and that the company does “not use user content to train AI models, nor do we sell or share files with third parties.” The company acknowledged that AI had been under consideration “to improve content moderation,” but confirmed that “such a feature hasn’t been built or deployed in practice.”
The statement added: “We’ve since updated the terms further to make them easier to understand. We’ve also removed the mention of machine learning, as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension.”
Clause Now Dropped
Following the uproar, the AI-related clause appears to have been dropped entirely from the updated version of Section 6.3. The new text grants WeTransfer a royalty-free licence to use content strictly for “operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.” Importantly, it reinforces that users retain ownership and intellectual property rights over their content, and that processing complies with GDPR and other privacy regulations.
What’s Changed and What Hasn’t?
From a legal perspective, WeTransfer’s licensing terms weren’t entirely new. Earlier terms already included broad usage rights necessary to operate the service, such as the ability to scan, index, and reproduce files. However, the new inclusion of AI-specific language, especially amid public concern about AI and data usage, introduced a new level of perceived risk.
As the company explained: “The language regarding licensing didn’t actually change in substance compared to the previous Terms of Service… The change in wording was meant to simplify the terms while ensuring our customers can enjoy WeTransfer’s features and services as they were built to be used.”
Nonetheless, perception matters. The way the AI clause was introduced, without technical limitations, public explanation, or opt-out options, undermined confidence at a time when many businesses are increasingly sensitive to data governance.
Broader Industry Fallout and Lessons for Tech Providers
WeTransfer is far from alone in facing scrutiny over AI terms. For example, back in 2023, Zoom had to walk back similar policy updates after suggesting it could use customer audio and video to train its AI models. Dropbox, Slack, and Adobe have all been forced to issue clarifications in recent months after terms of service changes sparked similar fears.
For regulators, the episode highlights ongoing gaps in user protection. In the UK, the ICO (Information Commissioner’s Office) has warned companies that AI development must respect explicit consent, clarity of purpose, and data minimisation, all of which could come under strain when licensing terms are broadly written.
For businesses, the incident is a reminder to read the fine print, especially as more cloud services evolve their models to incorporate generative AI, content filtering, and user analytics.
As an example, a marketing team using file-sharing services or cloud-based creative tools should now routinely assess licensing clauses for AI-related language, even if those features are not currently in use. Procurement teams may also need to establish red lines around AI usage to safeguard proprietary material.
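To make that kind of review more concrete, the short Python sketch below shows one way such a check might be partially automated: scanning a terms-of-service document for AI- and licensing-related phrases worth escalating to legal or procurement. This is purely illustrative and not a tool offered by WeTransfer or any other provider; the keyword list and the file name terms_of_service.txt are hypothetical examples.

```python
# Illustrative sketch: flag sentences in a terms-of-service document that
# contain AI- or licensing-related phrases, so a human can review the clause.
# The keyword list and file name below are hypothetical examples.
import re

AI_TERMS = [
    "machine learning", "artificial intelligence", "train", "model",
    "derivative works", "sub-licensable", "perpetual", "royalty-free",
]

def flag_clauses(text: str) -> list[str]:
    """Return sentences containing any of the watch-list phrases."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if any(term in s.lower() for term in AI_TERMS)]

if __name__ == "__main__":
    with open("terms_of_service.txt", encoding="utf-8") as f:
        for clause in flag_clauses(f.read()):
            print("REVIEW:", clause.strip())
```

A flagged sentence is not proof of a problem, only a prompt for closer human reading of the surrounding clause.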
Trust Takes Time to Build, and Moments to Erode
Despite WeTransfer’s efforts to clarify and course-correct, replies on social media appear to remain largely sceptical. Some users have suggested the company had been testing the waters for broader AI permissions, only to retreat when the backlash hit. Others have expressed a desire to move to alternatives, such as Swiss-based Tresorit or Proton Drive, that offer end-to-end encryption and stronger privacy guarantees.
While WeTransfer may weather the storm, the event highlights a wider issue for the tech industry: transparency around AI is no longer optional. As public awareness of AI training practices grows, even small wording changes can trigger major reputational fallout. And for companies built on the trust of creative professionals, that risk is especially acute.
What Does This Mean For Your Business?
For UK businesses and creative professionals in particular, this episode serves as a clear warning that assumptions about how cloud-based platforms handle data can no longer be taken at face value. The practical risk may have been limited in this instance, but the reputational impact is real, and the consequences of poor communication are hard to reverse. For companies that regularly transfer visual, written, or proprietary material via WeTransfer or similar services, it may prompt a review not only of terms and conditions, but of where and how sensitive files are shared in future.
For WeTransfer, the timing could hardly be worse. As demand grows for privacy-conscious alternatives in an AI-saturated market, any perception of blurred boundaries risks handing competitive advantage to rivals positioning themselves as more transparent or security-first. Providers such as Proton Drive, Filestage and Internxt are already responding to this shift, actively marketing their commitment to zero-knowledge infrastructure and end-to-end encryption.
Regulators and legal teams are also likely to be watching closely. The blurred line between operational necessity and expansive licensing is fast becoming a regulatory priority. In the UK, organisations working in regulated sectors, such as legal, health or financial services, may find that contract terms involving generative AI now trigger enhanced scrutiny from internal compliance and external auditors alike.
The broader takeaway from this story is that, as AI becomes more embedded in the digital infrastructure businesses rely on, consent must be granular, wording must be clear, and trust must be continually earned. WeTransfer’s quick backtrack may limit the immediate fallout, but it will likely be remembered as yet another sign of how easily tech companies can alienate users when they fail to communicate transparently, especially when the stakes involve creative ownership, client confidentiality, and commercial value.