Meta is taking legal action against a company accused of flooding its platforms with ads for non-consensual AI-generated nudity, while Disney and Universal have launched a separate lawsuit claiming one of the world’s most popular image-generating tools is built on stolen intellectual property.

Meta Targets CrushAI in Major Legal Push

Meta has filed a lawsuit in Hong Kong against Joy Timeline HK Limited, the company behind CrushAI, an app that uses generative AI to undress photos of clothed individuals without their consent. According to Meta, the service ran more than 87,000 ads across Facebook and Instagram, often using misleading images and evasion tactics to bypass platform rules.

Repeated Violations

Meta’s lawsuit alleges that CrushAI’s operators repeatedly violated Meta’s advertising policies and continued to create new accounts and domains to distribute ads even after multiple take-downs. Meta said the company operated under names like “Eraser Annyone’s Clothes” and used generic visuals in ads to sidestep detection systems. In one example cited in court filings, an ad featured a split image of a woman clothed on one side and digitally undressed on the other, with phrases like “BRA OFF” and “PANTS OFF” alongside captions such as “Upload a photo to strip for a minute.”

Meta’s lawsuit seeks to stop the defendants from using its platforms entirely. A company spokesperson stated, “This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it.”

Scale of Abuse Raises Platform Accountability Questions

The volume of ads involved in the case appears significant: reports indicate that over 135 Facebook pages and more than 170 business accounts were used to promote AI undressing services, many of them targeting users in the US, UK, Canada, Australia and Germany. According to investigative journalist Alexios Mantzarlis (who first reported on CrushAI’s ad activity), around 90 percent of its website traffic came directly from Meta-owned platforms.

Meta has not only sued but has also expanded its detection and enforcement methods. Reports indicate that new tools can identify suspicious ads even when they contain no explicit content, using copy detection and adversarial network analysis. Since the start of 2025, Meta says it has dismantled four separate networks of such advertisers and shared over 3,800 URLs linked to nudify services with other tech firms via the Tech Coalition’s Lantern programme.
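To illustrate the general idea behind "copy detection" of ad text, here is a minimal sketch in Python using Jaccard similarity over character shingles. This is purely illustrative and is not Meta's actual system; the example ad copy, function names and threshold are all invented for the sketch.

```python
# Illustrative sketch only: flagging ad copy that closely matches
# previously removed ads, even when the creative contains no explicit
# imagery. NOT Meta's actual system; names and thresholds are invented.

def shingles(text: str, k: int = 5) -> set[str]:
    """Break text into overlapping k-character fragments."""
    text = text.lower()
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two shingle sets (0.0 = disjoint, 1.0 = identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Ad copy previously removed for policy violations (hypothetical corpus).
KNOWN_BAD = ["Upload a photo to strip for a minute"]

def looks_like_known_ad(copy: str, threshold: float = 0.6) -> bool:
    """Return True if the copy is a near-duplicate of a removed ad."""
    s = shingles(copy)
    return any(jaccard(s, shingles(bad)) >= threshold for bad in KNOWN_BAD)
```

Real systems would also fingerprint images and look at account-level signals (shared payment details, domains, creation patterns), which is roughly what "adversarial network analysis" refers to: mapping clusters of accounts rather than judging each ad in isolation.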

Monetising Harmful Content Through Mainstream Platforms

This case highlights how AI tools are being used not just to produce harmful content, but to monetise it through mainstream ad platforms. Meta’s decision to pursue litigation suggests a growing willingness to tackle abuse at the source rather than relying solely on content moderation. The company has also backed new US legislation like the TAKE IT DOWN Act, aimed at removing non-consensual intimate images from the internet more broadly.

Tech Industry Struggles With Deepfake Threat

The CrushAI case is not an isolated incident. Meta, TikTok and others have all faced rising pressure over how easily such tools reach users, especially teenagers. Despite bans on search terms like “undress” and “nudify,” demand for these apps has grown sharply in recent months. In 2024 alone, researchers found millions of ad impressions for similar services across YouTube, X and Reddit.

The business model is simple but troubling: create synthetic nude images from innocent photos using AI, serve ads via loopholes in platform rules, and profit from traffic and paid services. Meta argues that only cross-industry cooperation and stronger regulation will stop the spread of such services. “Removing them from one platform alone isn’t enough,” the company wrote in a June 2025 update.

Should Meta Have Acted Faster?

However, critics say Meta should have acted faster. Although the problem has been known since at least 2023, many CrushAI-linked domains remained live and active into this year. Privacy campaigners argue that platforms must improve human oversight of AI-driven ad systems, particularly when dealing with abusive content aimed at minors or vulnerable groups.

Disney and Universal Take Aim at Midjourney Over IP Use

While Meta fights AI abuse through its own platforms, another battle is unfolding in the entertainment world. Disney and Universal have recently filed a joint lawsuit in California against San Francisco-based Midjourney, accusing it of using copyrighted characters and imagery without permission.

The studios argue that Midjourney’s generative AI models have enabled users to create countless unauthorised depictions of characters like Yoda, Elsa, Darth Vader and the Minions. According to the complaint, the tool functions as an “AI-powered vending machine” that outputs copyrighted content on demand, without adequate transformation or permission.

Horacio Gutierrez, Disney’s chief legal officer, said: “Piracy is piracy, and the fact that it’s done by an AI company does not make it any less infringing.”

Midjourney is reported to have generated around $300 million in revenue in 2024. It is also developing a video generation service, which the plaintiffs warn could extend the infringement into moving images. While Midjourney has not responded publicly to the lawsuit, its website describes the team as a “small self-funded research lab” with fewer than a dozen full-time staff.

Fair Use, Transformation and Legal Uncertainty

The Midjourney case cuts to the heart of one of the thorniest questions in current copyright law: how much transformation is enough to qualify as fair use? Syracuse University professor Shubha Ghosh noted, “A lot of the images that Midjourney produces just seem to be copies of copyright characters that might be in new locations or with a new background.”

The studios argue this isn’t transformative in a meaningful sense. However, Midjourney’s defenders claim its models are trained on vast quantities of publicly available images and that user-generated content can vary widely in form and purpose. The outcome may hinge on whether courts see Midjourney’s tools as akin to remixing or as unauthorised reproduction.

IP lawyer Randy McCarthy has reportedly warned that the case is far from clear-cut, saying: “No litigation is ever a slam dunk, and that is true for Disney and Universal in this case.” He points to Midjourney’s terms of service and the complexity of fair use law as applied to AI-generated content.

A Growing Legal Reckoning for AI

Both lawsuits reflect a broader shift in how tech companies, regulators and rights holders are responding to the explosive growth of generative AI. While the technology is transforming fields from entertainment to education, it is also forcing courts to confront unprecedented questions about privacy, consent, and intellectual property at scale.

For example, while Meta is investing in machine learning to better detect nudify ads, legal pressure may ultimately do more to stop app makers from operating in the first place. Similarly, Hollywood’s case against Midjourney may define future boundaries for AI training, commercialisation and user outputs.

These cases also raise operational questions for AI developers and platforms alike. For example, businesses using AI models in customer-facing products will need to monitor legal risks more closely, especially where training data or outputs involve real people or proprietary content. The financial, reputational and regulatory costs of getting this wrong are starting to come into sharper focus.

What Does This Mean For Your Business?

The outcomes of these lawsuits could set influential precedents in how AI content is policed, monetised and legally challenged across both the tech and entertainment industries. In Meta’s case, the scale of abuse has forced the company to shift from reactive moderation to proactive disruption and litigation. The company’s legal and technical responses also highlight the degree to which AI-generated content has outpaced existing enforcement systems, raising critical questions about how other platforms will handle similar threats. While Meta’s use of machine learning and industry-wide collaboration may help close the gap, regulators and watchdogs will be watching closely to see whether these measures are sufficient, or merely reactive damage control.

For UK businesses, these developments highlight the need to approach AI integration with greater care, especially when it involves third-party content, image generation or user data. Any business using or developing generative tools must understand not just the technical capabilities, but also the legal and ethical frameworks now forming around them. Whether it’s a platform hosting user-generated images or a marketing agency using AI to create branded visuals, the risks associated with misuse, infringement or reputational harm are now more tangible than ever. Ensuring that AI systems are responsibly sourced, monitored and legally compliant is now essential.

The legal action from Disney and Universal shows that large rights holders are prepared to challenge even the most technically complex cases of copyright use. Although Midjourney is not accused of creating content directly, it stands accused of enabling users to infringe at scale by offering tools trained on protected IP. This line of legal argument may soon be tested further if other AI firms follow similar models. For other stakeholders in the creative sector, from publishers to games studios, the message is that commercialising AI without clear safeguards can bring substantial legal exposure.

It seems the more AI tools intersect with real people’s identities and other people’s intellectual property, the more likely it is that platforms, developers and even users will be drawn into litigation. The next few months are likely to shape not just individual company policies, but broader norms around how AI is trained, deployed and held accountable across multiple sectors.