When Billionaires Build Bunkers
- Grayson Tate
- May 9
- 4 min read
Updated: May 21
Paul Tudor Jones is not an alarmist. He’s a billionaire hedge fund manager (and founder of the Robin Hood Foundation) with enough access to walk into a room of 40 global leaders—household names in science, politics, finance, and tech—and ask them a simple question:
Is AI a threat to humanity?
The answer, according to Jones? A resounding yes. Not just because AI could disrupt jobs or reshape economies. But because it might, in the next two decades, wipe out half of humanity. And these weren’t sci-fi screenwriters saying it. They were the AI model builders themselves.
A 10% Chance of Apocalypse
In a private (and undoubtedly lavish) retreat, the group was asked to respond to this proposition:
“There’s a 10% chance AI will kill 50% of humanity in the next 20 years.”
All four leading developers of the most recognized models agreed.
That's the kind of probability that would ground an airline. But in AI, it’s treated as a footnote.
Warnings about artificial general intelligence (AGI) are not new. For years, prominent researchers and tech leaders—including Elon Musk, Geoffrey Hinton, and Stuart Russell—have expressed concern over the pace and direction of AI development. No meaningful international agreement exists to slow the pace. No shared safety standards have been universally adopted. And many of the individuals raising alarms are also the ones funding or building the technology. We’re no longer asking if something will go wrong; we’re calculating the odds like it’s weather.
Ant Builds God
So, if the danger is truly imminent, why is no one willing to step in? The reality is that the competition between tech companies is too intense. The geopolitical stakes—particularly among China, Russia, and the U.S.—are too high. There’s no global pause button. Even Musk, who called for a six-month moratorium back in 2023, is still pressing forward. It has all the earmarks of an arms race, only this time the weapon is smarter than the people designing it.
Artificial General Intelligence isn’t a fantasy anymore. It’s six months to five years out, depending on who you ask. And once we hit AGI, we sprint—possibly in minutes—toward Artificial Superintelligence.
Think: ant builds god.
Jones, half-joking, has said it may take “an accident where 50 to 100 million people die” before the world takes these risks seriously. It’s the kind of comment that sounds absurd—until you realize he wasn’t joking.
What AI Learns Depends on What We Feed It
Amid this urgency, one feature of AI deserves more attention than it receives: the source material. Large language models are trained on vast quantities of human-generated data—books, articles, message boards, social media posts, code repositories, and more.
These systems do not think. They mirror. And increasingly, they are modeling their behavior on what we collectively produce. This raises a question rarely asked outside technical circles: What are we teaching AI to become?
If training data is the foundation of model behavior, then the tone, quality, and intent of those data sets should be treated as a moral imperative—not just technical optimization. The training diet for these models includes everything from scientific journals to Reddit rants, from classic literature to internet junk. AI is not just trained on what we say, but how we say it: our logic, our biases, our impulses, our empathy—or lack thereof.
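To make the idea of a “training diet” concrete, here is a minimal, hypothetical sketch in Python. The sample documents, quality heuristic, and threshold are invented for illustration; real data pipelines rely on trained quality classifiers, deduplication, and toxicity filters, but the principle is the same: whatever survives the filter is what the model learns to mirror.

```python
# Hypothetical illustration: curating a "training diet" for a language model.
# The documents, heuristic, and threshold below are invented for this sketch;
# production pipelines use far more sophisticated filtering.

def quality_score(text: str) -> float:
    """Toy heuristic: reward longer words, real sentences, and less shouting."""
    if not text.strip():
        return 0.0
    words = text.split()
    avg_word_len = sum(len(w) for w in words) / len(words)
    shouting = sum(1 for c in text if c.isupper()) / max(len(text), 1)
    has_sentences = 1.0 if any(p in text for p in ".!?") else 0.0
    return avg_word_len * has_sentences * (1.0 - shouting)

def curate(corpus: list[str], threshold: float = 3.0) -> list[str]:
    """Keep only documents that clear the quality bar."""
    return [doc for doc in corpus if quality_score(doc) >= threshold]

if __name__ == "__main__":
    corpus = [
        "The essay modeled its argument carefully, weighing evidence on both sides.",
        "LOL U R WRONG!!! EVERYONE KNOWS THIS!!!",
        "She wrote as if a mind were listening, because one was.",
    ]
    for doc in curate(corpus):
        print(doc)
```

In this toy run, the thoughtful sentences pass the filter and the all-caps outburst does not—a crude stand-in for the editorial choices, human and automated, that decide what a model is allowed to imitate.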
Writing as Calibration
AI isn’t thinking for itself, at least not yet. It’s echoing us, which means we are the teachers. We drive the curriculum. In this light, the role of authors, artists, and public intellectuals shifts subtly but significantly. The audience is no longer only human. The essays, stories, and reflections we publish may be ingested not just by readers, but by learning systems. And while that doesn’t change the job description, it does expand the context. Writing, in this moment, is literally shaping our future.
We already know AI is outpacing regulation. But culture still gets to decide what’s celebrated. That’s where authorship matters most—not as resistance, but as design. If AI is feeding off our thoughts and information, we should be feeding it the good stuff from the local farmer’s market. Not because we owe the machine anything, but because we owe it to ourselves.
What if the author’s job now isn’t just to entertain or inform, but to nourish? If AI is feeding on our intellectual exhaust, then we need to be giving it more than clickbait and outrage. We need to be writing stories that stretch empathy, essays that model reason, dialogue that invites complexity. We need to write like we're shaping a mind, because we are.
Life Is No Retreat
When Paul Tudor Jones asked one of the unnamed AI model leaders what he was doing in the face of this imminent threat, it was more than a little disconcerting that the answer was that he’s buying land, raising cattle, and stockpiling provisions—essentially building a billionaire’s bunker as his backup plan. When the people building the future start preparing for its collapse, it makes you wonder what they know that the rest of us don’t.
But our plan doesn’t have to be retreat. It can be responsibility. If the future is shaped by the data AI ingests, then let’s give it a healthy diet. The question becomes not just how powerful AI will become, but what we shape it to be.
For now, that part is still up to us.