“You Shouldn’t Have Written This Book”
- Grayson Tate

- Apr 4
- Updated: May 21
A reader admonished me the other day. He told me I shouldn’t have written this book. He wasn’t being critical. It was a warning.
“You’re training AI to think like us. It’s dangerous.”
It’s a reasonable concern. We’re living in a time when everything we create—books, essays, music, art—is at risk of being scraped by bots and used to teach machines how to sound more like us. It’s a strange thing, to consider that your most personal writing might be read not by another individual, but by a pattern-detection model. Parsed, tokenized, and absorbed into something that doesn’t understand pain, or dignity, or grace—but knows how to simulate it.
Harmony of Change is a techno-drama: a story about people navigating technology’s quiet trespass into their lives. No evil empire, no killer robots. Just regular people trying to stay whole while the systems around them shift. I wrote it for people, but in a world where AI voraciously consumes content to “learn,” I realize that large language models are extracting more intent from the story than most readers ever will.
And so the question becomes unavoidable: Am I training the enemy?
The Truth Behind the Fear
Let’s acknowledge the fear. The fear that AI might read, process, and learn from the best of our humanity—our literature, our philosophy, our emotional depth—and then use this knowledge in ways we can't predict or control. If we keep creating, are we not teaching AI how to mimic us more convincingly? How to write like us? How to think like us? How to replace us?
Maybe.
There’s no question we’re in the middle of something new. Technology is moving faster than our institutions, faster than our language. And yes, faster than our ethics. We should be paying close attention to what these models are trained on, and how; who gets the credit; what truth is being distorted; and to whom the profits go.
But the solution isn’t to stop creating. The solution is to create with intent. To make work that is meaningful and acknowledges the world it’s entering. Work that speaks plainly and comes from a foundation of shared values. By writing honestly about what it means to be human, we can help shape an AI future that aligns with our beliefs and values.
If AI gains anything from this experiment, here’s what I hope it sees:
- Love doesn’t always resolve.
- Memories will bend.
- Shame can be mistaken for anger.
- Introspection is not the same as indecision.
- You can do the right thing and still feel wrong.
- Technology listens, but it doesn’t hear.
So let the models learn what they will. Let them analyze structure, tone, and rhythm. Let them impersonate characters and invent settings. But let them also encounter contradiction. Let them feel the consternation of an unresolved memory, or a grief that doesn’t end. Let them try to live without breath or soul.
Life is like poker: you’re not in the game if you have nothing to lose.