Meta Has Your Words
- Grayson Tate

- Apr 17
- 3 min read
Updated: May 21
By now, the details are familiar: Meta is being sued by a group of authors, including Ta-Nehisi Coates and Sarah Silverman, for allegedly using pirated versions of their books to train its large language model, LLaMA. Meta, in response, is asking a federal judge to dismiss the lawsuit on the grounds of fair use.
This isn’t just another legal skirmish between Big Tech and creative professionals. It’s a test case for how we define the limits of intellectual property in the age of generative AI. And it raises a question no one seems ready to answer: Do creators still have any say in how their words are used once they’ve gone digital?
What Meta Is Arguing
In its March 25 court filing [Kadrey v. Meta Platforms, Inc., No. 3:23-cv-03417 (N.D. Cal.)], Meta didn’t deny using copyrighted books to train its LLaMA model. Instead, it claimed that doing so qualifies as fair use. According to Meta, the AI does not copy or reproduce the books; it transforms them into something new.
“What it does not do,” Meta wrote, “is replicate Plaintiffs' books or substitute for reading them.”
Instead, it argued, LLaMA helps users write business reports, translate conversations, generate code, or even compose letters and poems. In Meta’s view, that qualifies as transformative, not exploitative.
But the authors—represented in part by writer Richard Kadrey—disagree.
“Meta wanted books for their expressive content,” they argued in an earlier filing. “But instead of paying rightsholders, Meta systematically took and fed entire copies of pirated works into its LLMs.”
At its core, the dispute is not just about what AI can do. It’s about how AI learns, and whether that learning should be governed by the same ethical and legal rules that shape human learning.
Consent and Control
The question of fair use will play out in court, likely for years. But outside the courtroom, another debate is unfolding—quieter, and in many ways, more consequential.
Creative work has always had a second life once it’s published. It’s quoted, shared, adapted, sometimes plagiarized, and often misunderstood. That’s the tradeoff: once you put something into the world, you give up control.
But what happens when your work is not just quoted—but absorbed? When it becomes part of a dataset that powers a system designed to imitate the voice, logic, or insight that took you years to develop? Without citation, credit, or remuneration?
What Cannot Be Rebuilt
This lawsuit is not an isolated incident. Similar cases are pending against OpenAI, Stability AI, and others. Artists, journalists, and musicians are watching closely, because what’s being decided here will shape how intellectual property is treated going forward.
Meta says it needs access to this material to power innovation. The plaintiffs say they were never asked, never compensated, and never agreed to participate. It’s easy to make an argument for both sides. But only one side has something to lose that cannot be rebuilt.
It’s About Ethics, Not Law
This case won’t resolve the AI debate. It may not even set a strong precedent. But it marks an inflection point. If we decide that “transformative use” includes training machines to mimic human creativity, then we also decide—tacitly or explicitly—that consent is no longer a prerequisite.
That may be expeditious, but it isn’t ethical. And it’s certainly not creative.
Whatever the courts ultimately decide, the training is already done; you can’t put the genie back in the bottle. It seems only fair, then, that Meta give back a percentage of what it has taken.