
Can You Tell When a Story Was Written by AI?

Updated: May 21

Lately, I’ve been asked this question more and more. People send me articles, short stories, manuscripts—even resumes—asking whether I believe they were generated by AI. Perhaps this is my ethos now. Who better than a counterfeiter to spot a counterfeit? Who better than a grifter to identify a con? And that’s the unsettling part. It has become increasingly difficult for me to tell the difference, even with my own words.


My wife will sometimes ask, “Is that you or is that AI?”


Before rushing to take credit, I have to read first. Sometimes I can tell. Other times, I’m not so sure. It’s a strange position to be in. As an author who uses AI openly in my creative process, I’ve spent more time than most analyzing the boundary between human and machine. I’ve edited it, collaborated with it, tried to challenge it. And occasionally, I’ve found myself reading something I wrote and wondering—was that me, or was that the model?


What AI Gets Right

There are many things machines do better than people. Repetitive, mundane tasks are one of them, including data entry. Machines are also very good at imitation. They can mimic style and replicate rhythm. They can learn how a sentence is structured, and what patterns create narrative flow. Combine these abilities with training on billions, even trillions, of our own words, and it quickly becomes clear how machines have learned to sound like us.


But they have yet to know the reason for it. They lack the creative urge, that willingness to share vulnerable thoughts. And that’s where the gap still lives. Not in the language, but in its purpose.


The Line Between Authenticity and Imitation

AI can write about grief, but it will never know grieving—the way small talk can feel like betrayal, or how shadows fall on an empty chair. It can write about heartbreak too—possibly better than most people—because there’s a wealth of material to draw from. But it doesn’t understand what it means to second-guess itself on a lonely night. It will never regret that certain something it didn’t say. Emotion is a data point, a statistical probability. You can train a model to produce empathy-shaped sentences, but not to ache, not to wonder. Machines don’t create because they have something to figure out. They create because they were told to.


The Illusion of Depth

If you read enough machine-generated content, you’ll start to notice patterns: everything fits, a little too perfectly. When I read something written by a person, I’m looking for friction, not perfection. It could come in the form of a sentence that doesn’t quite make sense but means everything. It could be a metaphor that is foreign but feels familiar. These are signs of a writer taking risks. Not just crafting prose, but working something out on the page. Yes, sometimes it falters. But that’s where authenticity lives.


Creation Requires Risk

The models are getting better. No question, the gap is narrowing. But for now, what separates human storytelling from algorithmic mimicry is the willingness to take risks. The risk of being misunderstood. The risk of exposing something personal. The risk of failing—badly, publicly, and without a safety net.


A machine has no skin in the game. It cannot lose face. It cannot regret. That’s why it cannot create in the truest sense. Because creation—real creation—always comes with a chance of loss.


The Sentence That Had to Be Written

So, can I tell when a story was written by AI? Not always. Not immediately. But if you read closely, you’ll start to notice when something feels off. When the grief arrives right on cue. When the character grows just a little too efficiently. When the insight seems designed, rather than discovered.


It’s not about what’s present on the page; it’s about what you feel when you’re reading it. The quiet hope that someone out there might understand not just the sentence, but the reason it had to be written.

 
