$1.5 Billion Settlement: What the Anthropic Case Means for AI and Copyright

In September 2025, Anthropic agreed to a settlement totaling USD 1.5 billion in a class action lawsuit brought by authors who accused the AI company of using millions of copyrighted books without permission to train its language model, Claude.

The case is considered one of the most important precedents to date in the dispute over AI training data and copyright.

What was the case about, specifically?

The lawsuit was filed in August 2024. The central allegations were:

  • Use of more than 7 million books from unauthorized sources
  • Downloads from piracy websites
  • In addition: the purchase, scanning, and subsequent destruction of physical books

Anthropic did not deny training its models on books as such; it came under pressure because of where the training data came from.

The ruling: Fair use has clear limits

The presiding judge made the following points clear:

  • ✅ Training AI models with books can qualify as fair use, as it is considered transformative.
  • ❌ Acquiring data via piracy sources is not permissible—regardless of the purpose.

For the first time, a court drew a clear distinction between how training data is used and how it is acquired.

The settlement in numbers

  • Settlement amount: USD 1.5 billion
  • Affected works: approx. 500,000
  • Compensation: around USD 3,000 per work
  • Scope: applies exclusively to past conduct

The settlement closes this specific case, but it does not resolve the open legal questions for the future.

Why this matters for companies

The case shows that legally uncertain training data is no longer a theoretical risk but a concrete liability and reputational issue.

For companies, this means:

  • Transparency from AI providers becomes critical
  • Data provenance and training methods must be traceable and documented
  • “Black-box AI” is becoming increasingly risky—especially in Europe

Why Jurilo deliberately takes a different path

For exactly these reasons, Jurilo by Lawise.ai is hosted exclusively in Switzerland and Europe.

Our principles are clear:

  • No use of LLMs trained on allegedly stolen or unlicensed data
  • No training on user or customer data
  • Full alignment with Swiss and European legal standards (DSG, EU-compliant governance)

The Anthropic case confirms one thing:
Legal certainty is not created retroactively—it must be built into the architecture from the start.

Conclusion

The USD 1.5 billion settlement marks a turning point for the AI industry.
It is no longer performance alone that will determine the future of AI systems, but data provenance, governance, and legal integrity.

For Europe and Switzerland, this is not a disadvantage—but a structural advantage.