
AI's Neutrino Burst

It wouldn’t be the first time. The late 1990s were filled with breathless predictions about the internet—how it would rewire human communication and usher in a new era of growth. Yes, it did those things. But first, the dot-com bubble had to burst, wiping out trillions in market value and tipping the economy into a recession. Today, we’re hearing the same chorus. But if the dot-com era was a bubble, the hype and expectations surrounding AI are a white-hot supernova by comparison.


OpenAI is reportedly valued at $300 billion. Google, Microsoft, Meta (combined value $7.16 trillion), and a swarm of startups are racing to define the “next big thing,” pouring billions into large language models (LLMs). It’s a new industrial revolution, they say. The dawn of Artificial General Intelligence (AGI). Maybe even superintelligence.


What if they’re wrong? What if it’s just an elaborate illusion? A modern zoetrope spinning fast enough to suggest movement where there really is none?


The Promise and the Potential

The generative AI boom is fueled by LLMs trained on vast amounts of internet data. They produce text that reads fluidly and images that dazzle at a glance. But beneath the surface lies a troubling inconsistency: the product rarely lives up to its potential.


Anyone who’s spent hours tweaking an AI-generated image prompt knows the frustration. You ask for a woman holding a violin and you get five fingers too many. You ask for a building in moonlight and get a dreamlike smear of architectural metaphors. The results are often uncanny, but rarely useful without significant intervention.


The same is true for AI-generated text. While it sounds polished, it frequently collapses under scrutiny. The phenomenon known as “hallucination” causes these systems to produce confident falsehoods, often cloaked in the authority of clean grammar and a professional tone.


In Mata v. Avianca, Inc. (S.D.N.Y. 2023), two lawyers were sanctioned for submitting a legal brief that cited six non-existent court cases—entirely fabricated by ChatGPT. The attorneys had failed to verify the citations, assuming the AI's confident tone equated to veracity. It didn’t. The federal judge called the incident “unprecedented,” a cautionary tale about the blind trust placed in generative systems.


Since then, there have been numerous other reported cases of AI hallucinations in legal filings. A new database, created by French lawyer and data scientist Damien Charlotin in early May 2025, reportedly tracks over 120 such instances going back to June 2023.


The Chicago Sun-Times (and subsequently the Philadelphia Inquirer) published a "Summer Reading List for 2025" in its "Best of Summer" special section. The list contained 15 book recommendations, but 10 of them were entirely fabricated by an AI system. While the books were attributed to real, well-known authors (e.g., Isabel Allende, Andy Weir, Brit Bennett, Taylor Jenkins Reid), the titles themselves did not exist, and their descriptions were also AI-generated.


These aren’t isolated glitches; they’re signs of a deeper limitation in the technology itself: it doesn’t understand, it predicts. It’s not intelligent, it’s dutiful—like a puppy performing tricks for its master. We hope for obedience but will accept amusement.
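To make "it predicts" concrete, here is a deliberately tiny sketch in Python. It is an illustrative toy, not how production LLMs are built (the corpus and the generate function are invented for this example); real systems are vastly larger neural networks, but they share the same objective: the likeliest next token, not the true one.

```python
import random
from collections import defaultdict

# A toy next-word predictor. It learns only which word tends to follow
# which; it knows nothing about what any of the words mean.
corpus = (
    "the court held that the contract was void "
    "the court found that the claim was barred "
    "the court ruled that the motion was denied"
).split()

# Count word -> next-word transitions observed in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Emit a fluent-looking sequence by sampling a likely next word."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break  # no observed continuation; stop
        words.append(random.choice(followers))
    return " ".join(words)

random.seed(3)
print(generate("the"))
# One possible output: "the court found that the motion was void"
# It is grammatical and confident, and it describes a ruling that
# never happened, stitched together from three unrelated sentences.
```

Scale that chain up by billions of parameters and trillions of words and the output becomes far more convincing, but the objective is unchanged: plausibility, not truth.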


Gary Marcus, a cognitive scientist and prominent AI skeptic, warns that the current boom in generative AI may be more spectacle than substance. In a recent interview, Marcus argued that LLMs are overhyped and that the industry's narrow focus on them is potentially harmful, crowding out more promising approaches to artificial intelligence. Marcus champions neurosymbolic AI as a better alternative, one that could more faithfully mirror human reasoning, and cautions that today's AI frenzy risks prioritizing marketability over durability.


"I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."

 

Déjà Vu, But Louder

It’s worth remembering that in the dot-com era, bad ideas didn’t look like bad ideas—until they failed. Companies like Webvan raised hundreds of millions promising to change the way we got our food. Pets.com spent its way into our collective mindshare before fading into financial infamy.


The size and speed of today’s AI investments make those stories look quaint. The stakes are far higher now, not only in capital but in social impact. This isn’t just about tools—it’s about rewriting how we think, work, and govern.


And yet, when the most celebrated uses of generative AI include rephrasing emails and generating deepfakes, it’s fair to ask whether the emperor has any clothes. Many of the most hyped applications (legal, graphic arts, coding, content creation) solve problems that arguably didn’t exist at scale. Others introduce new risks—legal, ethical, and social—that we’re only beginning to confront.


A Crash with Consequences

If the generative AI bubble bursts, it won’t just be a financial correction. It could cascade into broader economic fragility. There’s a global race for AI dominance, with governments now investing state resources to back private models. If the technology proves less transformative than promised, or if the infrastructure propping it up proves unsustainable, the fallout could reach far beyond Silicon Valley.


Because unlike the dot-com bust, this bubble is forming during a moment of inflation, social tension, and ongoing geopolitical instability. If AI is more parlor trick than paradigm shift, the crash could be directly proportional to the scale of the fantasy we’ve built around it.


The people investing in generative AI are not doing it for fun. They’re doing it with expectations. Returns will be demanded—soon, and in full. Should the product underdeliver, what will they sell instead?


User data? Behavior profiles? Private inputs that were never meant to be commodities?


We’re told that AI will transform everything. But before you believe the hype, ask yourself this: When was the last time you could fully trust the judgment of someone whose (ballooning!) paycheck depends on selling you the future? Because if this isn’t a revolution but a reckoning, then we’re not staring into the future; we’re standing in the blast radius of our own projections.

 

 
