This is an updated post in the Teaching AI Ethics series, originally published in 2023. Given the explosive developments in AI and copyright over the past two years – including major court cases, government decisions, and the first billion-dollar settlements – it felt essential to revisit this intermediate-level ethical concern. For the previous updated post on Truth, click here.
The concepts in this post remain complex, but now we have actual legal precedents, policy decisions, and real-world consequences to examine. The information is both more accessible (we have verdicts, not just speculation) and more philosophically prickly (the questions have only deepened as the technology has proliferated).
Copyright remains a hugely contentious aspect of Generative Artificial Intelligence, but in 2025, we’re no longer debating hypotheticals. We’re watching the conclusions of billion-dollar lawsuits, seeing governments choose sides, and witnessing the creative industries fight back against Big Tech. As multimodal GenAI continues to advance and produce increasingly sophisticated outputs in text, image, audio and video, the stakes have continued to rise. This post explores where we are now in the ongoing battle between innovation and creative rights.
Cover image: Catherine Breslin & Rens Dimmendaal / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
The legal landscape has shifted… Sort of
I’m starting with the big picture because the legal situation in 2025 looks quite different than it did in 2023, yet somehow equally uncertain. For a quick scan of how the situation has evolved, look at these developments from just the past year:
- Stability AI largely wins UK court battle against Getty Images
- Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit
- Australia Rejects AI Data Mining to Protect Creatives
- UK consultation on copyright and artificial intelligence: Walking a fine line
- Copyright Office Issues Key Guidance on Fair Use in Generative AI Training
We now have actual court verdicts (though they’re contradictory), government policies (pulling in opposite directions), and settlements totalling billions of dollars. And yet, despite all of these changes since 2023, the fundamental questions remain unresolved.
What’s Changed?
So, where are we now compared to 2023?
The main shift is that what were once hypothetical legal arguments have become real court cases with real consequences. Getty Images’ lawsuit against Stability AI – one of the most watched cases in recent months – concluded in November 2025 with a split decision that seems to have satisfied few and resolved almost nothing. Getty dropped its main copyright claims during trial after admitting that Stable Diffusion’s training happened outside the UK.
The judge ruled that the AI “doesn’t store or reproduce” copyrighted works, dismissing the copyright claims while finding limited trademark infringement. This finding sits uneasily with a growing body of research showing that image generation models can memorise at least some of the material they are trained on, encoded in the weights of the model. Legal experts noted the UK was left “without a meaningful verdict on the lawfulness of an AI model’s process of learning from copyright materials.”
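To make that technical disagreement concrete, here is a minimal sketch (not drawn from the case itself) of how memorisation studies typically test whether a model has effectively “stored” a training image: prompt the model with the original caption, then check whether any generated sample is a near-pixel-level copy of the original. The arrays, threshold and function names below are invented purely for illustration.

```python
# Illustrative sketch of a memorisation probe. This is NOT the method used in
# the Getty case; the images, the 0.1 threshold and the function names are
# hypothetical stand-ins.
import numpy as np

def normalised_l2(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised pixel-space distance between two images of the same shape."""
    return float(np.linalg.norm(a - b) / np.sqrt(a.size))

def looks_memorised(training_image: np.ndarray,
                    generated_images: list[np.ndarray],
                    threshold: float = 0.1) -> bool:
    """Flag the training image as 'memorised' if any generation is a near-copy."""
    return any(normalised_l2(training_image, g) < threshold for g in generated_images)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.random((64, 64, 3))  # stand-in for a training image
    # Stand-ins for images sampled from a model prompted with the training caption:
    samples = [original + rng.normal(0, 0.02, original.shape),  # near-duplicate
               rng.random((64, 64, 3))]                         # unrelated sample
    print(looks_memorised(original, samples))  # True: one sample is a near-copy
```

Studies in this vein report that models reproduce only a fraction of their training data this closely (duplicated images are the most likely to be memorised), which is partly why the legal and technical communities talk past each other on whether a model “stores” works at all.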
Meanwhile, Anthropic agreed to pay $1.5 billion in August 2025 to settle a lawsuit with authors: the first major AI copyright settlement. The case revealed an important distinction: a judge ruled that training AI on legally purchased books was fair use, but using millions of pirated books from “shadow libraries” (illegally ripped ebooks, often downloaded via torrenting) was not. This distinction matters because it suggests that how you acquire training data may count for more than whether you use copyrighted material at all.
The picture gets murkier still: different jurisdictions are taking radically different approaches. In October 2025 the Australian government rejected a proposed text and data mining (TDM) exception, meaning AI companies cannot use copyrighted Australian content for training without permission. The exception, proposed in a report by the Productivity Commission, was broadly criticised by the creative sector.
The Copyright Agency in Australia had this to say about the proposal:
The push for a TDM exception is primarily from multinational tech companies. It is part of a global strategy. For example, the creative industries in the UK are currently vigorously opposing a TDM exception that would cover AI training.
Copyright Agency
The creative sector celebrated the decision to rule out the exception as a major victory, with music industry organisation APRA AMCOS warning that, without protection, 23% of creators’ revenue – over $519 million – would be at risk by 2028.
The UK, by contrast, launched a consultation proposing an EU-style “opt-out” system where copyrighted works can be used for AI training unless creators explicitly reserve their rights. The consultation received over 11,500 responses and closed in February 2025, with the creative industries – and artists including Sir Elton John – largely opposing it and tech companies supporting it.
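For readers wondering what an “opt-out” or rights-reservation system could look like in practice, here is a minimal sketch of a training-data crawler honouring a machine-readable reservation. The UK consultation does not prescribe any particular mechanism; the robots.txt rules and the “ExampleAI-Trainer” user agent below are hypothetical, although the parser itself is Python’s standard library.

```python
# Minimal sketch of a crawler honouring a machine-readable opt-out. The rules
# and the "ExampleAI-Trainer" user agent are hypothetical; real-world signals
# include robots.txt blocks for AI crawlers and TDM reservation metadata.
from urllib import robotparser

EXAMPLE_ROBOTS_TXT = """\
User-agent: ExampleAI-Trainer
Disallow: /

User-agent: *
Allow: /
"""

def may_train_on(url: str, crawler_user_agent: str) -> bool:
    """Return True only if the site has not reserved its content against this crawler."""
    rp = robotparser.RobotFileParser()
    rp.parse(EXAMPLE_ROBOTS_TXT.splitlines())
    return rp.can_fetch(crawler_user_agent, url)

if __name__ == "__main__":
    print(may_train_on("https://example.com/artwork.jpg", "ExampleAI-Trainer"))  # False
    print(may_train_on("https://example.com/artwork.jpg", "GenericBrowser"))     # True
```

The policy problem, of course, is the default: under an opt-out model, silence means consent, and the burden of setting and policing these signals falls on individual creators rather than on the companies doing the scraping.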
And in the United States? Despite an administration that is very favourable to tech, there are more than 50 copyright lawsuits currently pending against AI companies. In May 2025, the U.S. Copyright Office released comprehensive guidance suggesting that current AI training practices likely don’t qualify as fair use when they compete with or diminish markets for original human creators, especially in fields like illustration, voice acting, and journalism.

The Creator’s Dilemma
There’s a fundamental tension emerging that some call “the creator’s dilemma”: under current US law, you generally can’t copyright AI-generated content, but others may still use your copyrighted works to train their models.
The Copyright Office’s January 2025 report confirmed that AI outputs qualify for copyright only where there’s sufficient human authorship. Using AI as an “assistive tool” (like spell-checking or removing objects from images) doesn’t disqualify your work. But simply providing prompts to generate content is not enough to qualify for copyright. The human contribution must involve “determining sufficient expressive elements.” Similar cases have occurred in China, with courts ruling AI-generated images cannot be copyrighted as they do not have “significant human input”.
This creates an impossible situation for many creators: if they opt out of having their work used for AI training, they protect their past work but may be disadvantaged as the technology advances and becomes increasingly integrated into industry-standard software like Adobe Photoshop. If they don’t opt out, their style and techniques become free training data for systems that could replace them. And anything they create using AI tools may not even be copyrightable, meaning others can freely copy it.
Some argue this is acceptable because AI benefits everyone through increased productivity and innovation, and some research suggests that less restrictive copyright law correlates with more AI research output, patents, and new businesses. But others are rightly asking: without fair compensation for artists, will we have any high-quality, human-made data in the future? If the best creators stop creating because AI has made it uneconomical, what will AI companies train their next-generation models on?

Beyond Images: Music, Text, and Cultural Sovereignty
While visual art dominated the conversation in 2023, by 2025 every creative field has found itself in crisis.
In music, AI can now replicate artists’ voices with stunning accuracy. The legal questions keep coming: Can you copyright a voice? Can AI-generated songs infringe on a musical style? What happens when AI trained on an entire discography releases “new” work in the artist’s style – perhaps even after the artist’s death? Whilst major music companies like Universal have been suing AI generators like Udio, they have now begun to partner with them and make acquisitions. What will the industry look like in another two years’ time?
In writing, authors are watching AI systems trained on their books produce content that competes directly with them. The New York Times is still pursuing its lawsuit against OpenAI and Microsoft over the use of millions of its articles, arguing that the AI creates a “market substitute” for its journalism. Meanwhile, Perplexity AI not only trains on copyrighted news but uses “retrieval-augmented generation” to actively fetch current articles, with the company literally telling users to “skip the links” to the original sources.
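For those unfamiliar with the term, retrieval-augmented generation simply means fetching relevant source text at query time and feeding it to the model alongside the question, so the answer is composed from the source without the reader ever visiting it. Below is a deliberately tiny sketch of the pattern; the articles, scoring and prompt template are invented for illustration and bear no relation to Perplexity’s actual system.

```python
# Toy sketch of the retrieval-augmented generation (RAG) pattern: retrieve
# source text at query time, then answer from it directly. The corpus, the
# keyword-overlap retriever and the prompt template are all hypothetical.

ARTICLES = {
    "https://example.com/ai-copyright-ruling": (
        "A UK court ruled on AI training and copyright in November 2025..."),
    "https://example.com/settlement": (
        "An AI company agreed to a large settlement with authors..."),
}

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[tuple[str, str]]:
    """Rank articles by naive keyword overlap with the query (toy retriever)."""
    words = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, retrieved: list[tuple[str, str]]) -> str:
    """Assemble the prompt the model answers from: the user never visits the source."""
    context = "\n".join(text for _, text in retrieved)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "What did the UK court decide about AI copyright?"
    print(build_prompt(question, retrieve(question, ARTICLES)))
```

The copyright complaint is visible right in the structure: the value of the publisher’s article is delivered to the user inside the prompt and the generated answer, while the click (and the advertising or subscription revenue attached to it) never reaches the original source.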
Perhaps most significantly, Indigenous communities have raised alarm about cultural appropriation through AI. In Australia, 89% of Aboriginal and Torres Strait Islander people surveyed believe AI has the potential to cause cultural appropriation, and 67% agree it makes protecting cultural rights harder. This is about more than economics: it is a question of cultural sovereignty and of communities’ ability to control their own cultural expressions.
Case Study: Getty Images v. Stability AI
In January 2023, Getty Images filed one of the first major copyright lawsuits against an AI company, accusing Stability AI of scraping 12 million of its copyrighted images to train Stable Diffusion. The case went to trial in June 2025 at London’s High Court, and the verdict came in November 2025.
Getty actually dropped its primary copyright infringement claims during trial, because Stability AI successfully argued – and provided witness testimony – that all training occurred on US-based Amazon servers, not in the UK. Since copyright is territorial, and UK law only applies to acts within the UK, Getty’s main case collapsed.
Getty pivoted to arguing for “secondary infringement”: that even if training happened elsewhere, offering Stable Diffusion to UK users amounted to importing infringing copies. Justice Joanna Smith rejected this in November 2025, ruling that Stable Diffusion “does not store or reproduce any Copyright Works (and has never done so).” The judge noted there was “very real societal importance” in balancing creative and tech industries but could only rule on the “diminished” case that remained.
Getty won a narrow trademark claim because some AI-generated images reproduced recognisable Getty watermarks. But on copyright – the core issue – they lost. Getty is now using findings from the UK case in their ongoing US lawsuit, where they refiled in August 2025 in San Francisco federal court.
As a case study in how copyright cases have changed since 2023, Getty v Stability provides some interesting material. First, where AI training physically occurs matters enormously. Second, legal frameworks designed for physical piracy don’t map cleanly onto AI systems that learn patterns rather than storing copies: contrary to the judge’s decision, a growing body of research indicates that GenAI models can retain and reproduce at least some of the data they are trained on, albeit in a very different form from a physical or other digital copy. Third, these cases are extraordinarily complex, factually dense, and expensive: Getty was reportedly seeking up to $1.7 billion in damages before dropping most claims.
The case also reveals something troubling: Getty’s pleadings were described by the judge as “inchoate” and “inferential” because they couldn’t prove what happened without Stability AI’s internal documentation. When Stability witnesses testified that no work occurred in the UK, Getty had little to counter with. This suggests that without aggressive transparency laws, proving copyright infringement in AI training may be nearly impossible.
Most importantly, legal experts seem to agree the case leaves the fundamental question unanswered. As one intellectual property partner put it: “The decision leaves the UK without a meaningful verdict on the lawfulness of an AI model’s process of learning from copyright materials.”
We’ve had our major test case. We’ve seen millions spent on litigation. We still don’t know if training AI on copyrighted works is legal.
Teaching AI Ethics
Each of these posts offers suggestions for incorporating AI ethics into your curriculum. Every suggestion in this article includes linked resources – articles, reports, videos, or policy documents from 2024-2025.
- Legal Studies: Compare how Australia, the UK, and the EU are handling AI and copyright differently. What do these different approaches tell us about how legal systems balance innovation and protection?
- English and Literature: Read about the $1.5 billion Anthropic settlement with authors. How does AI challenge the Romantic notion of the solitary creative genius? What happens to authorship when creation becomes collaborative: human and machine?
- Computer Science: Examine the US Copyright Office’s May 2025 guidance on AI training and fair use. How might you design an AI training system that respects copyright? What technical solutions could provide transparency?
- Philosophy: Explore the concept of “the creator’s dilemma” – creators can’t copyright AI outputs but their work trains AI. What ethical frameworks help us think about balancing individual rights against collective benefits?
- Business and Economics: Analyse research showing that restrictive copyright correlates with less AI innovation, but without compensation, we may lack quality training data tomorrow. How do we optimise for both short-term innovation and long-term creative production?
- Media Studies: Investigate the Perplexity AI lawsuit where news publishers accuse AI of both training on and actively fetching their content. How does AI challenge the business model of journalism? Who benefits and who loses?
- Visual Arts: Watch how the Getty Images case concluded without resolving whether AI training on images is lawful. As an artist, how would you respond to current legal uncertainty: opt-out, license your work, or something else?
- Music: Consider APRA AMCOS’s warning that 23% of musicians’ revenue could be at risk from unlicensed AI by 2028. How do we value music in an age when AI can generate unlimited songs?
- Social Studies: Explore why 89% of Indigenous Australians believe AI increases cultural appropriation risks. What obligations do AI developers have to Indigenous communities? Is cultural sovereignty possible in the age of web scraping?
Note: This is a teaching resource exploring AI ethics. For current legal advice on AI and copyright in your jurisdiction, consult a qualified intellectual property lawyer.
The next post in this series will explore AI and privacy, examining how the data AI companies collect goes far beyond just training material. Join the mailing list for updates:
