Big Bubbles, No Troubles!
The Study That Struck a Nerve
Last week, a headline swept through LinkedIn feeds and Slack channels like wildfire: "95% of corporate AI projects fail."
The source was MIT's "State of AI in Business 2025" report. Within days, it had become the most-shared piece of AI research this year. Executives forwarded it to their boards. Engineers posted it with knowing emojis.
But here's what's interesting: nobody actually read the study.
I did. And what I found wasn't groundbreaking research. It was something far more revealing. The methodology consisted of self-selected survey respondents and casual interviews at industry conferences. No representative samples. No audited business outcomes. Success and failure were left entirely to individual interpretation.
Think about that for a moment. We're watching an entire industry react to a number that's essentially a collective feeling dressed up as data.
Why Bad Data Becomes Gospel Truth
The power of this study isn't in its rigor: it's in its timing.
When people see "95% failure rate," they don't think about sample sizes or methodology. They think: "Finally, someone said what I've been feeling." It's the corporate equivalent of a horoscope that seems eerily accurate because it describes something everyone experiences.
You've seen this pattern before, even if you don't realize it.
Remember the great ERP rollouts? In 2015, if you'd surveyed companies three months into their Oracle or SAP implementations, you'd have gotten similar numbers. Everyone was frustrated. Everything felt broken. The consultants were expensive, the timelines were blown, and nobody could find anything in the new system.
Yet Oracle and SAP tripled in value during that period.
The difference? Nobody called it a bubble. They called it enterprise software.
The Three Forces Creating Today's "Bubble"
But let's be clear: there is something unusual happening in AI right now. It's just not what most people think.
Force #1: The YC Effect
Y Combinator used to be selective. Really selective. They'd fund 90 to 120 companies per year, each one vetted, challenged, refined.
Then something changed.
In the last 12 months, their investment volume has nearly tripled. They're not investing in companies anymore. They're investing in possibilities. A model, a demo, maybe a clever prompt. They write $500,000 checks to ideas that wouldn't have gotten a coffee meeting two years ago.
Here's what happens next: These proto-companies need to look legitimate. So they build beautiful websites (takes an afternoon with Framer). They add logos from friends' companies as "customers." They appear, from the outside, indistinguishable from real businesses.
You look at the landscape and see thousands of AI companies. But you're really seeing a mirage, a marketplace of maybes, funded on potential rather than product.
Force #2: The Velocity Problem
There's a programmer I know who built an entire SaaS product in a weekend. Not a prototype; a working product with payments, user authentication, the works.
Five years ago, that would have taken him three months.
Tools like Cursor and Lovable haven't just made coding faster; they've made starting a company feel trivial. You can go from idea to launched product before your coffee gets cold. And that's exactly the problem.
When building becomes this easy, you stop asking if you should build. You just build. The market floods with products that exist not because anyone needs them, but because someone could make them.
It's like giving everyone a printing press and wondering why there's so much bad poetry.
Force #3: The Paradox of Perfect Choice
Here's a scene that plays out in conference rooms every day:
A team evaluates AI tools. Product A has amazing analytics but weak integrations. Product B integrates with everything but lacks customization. Product C is customizable but expensive. Product D is affordable but missing that one critical feature from Product A.
Six months later, they're using none of them effectively.
Psychologists call this choice overload, but there's something deeper happening. When you can always imagine a better combination of features, when perfection seems just one vendor switch away, you never fully commit to making any single tool work.
You don't fail because the tools are bad. You fail because you're comparing reality to an imaginary perfect solution assembled from the best parts of everything you've seen.
The Bubble That Pops Differently
So yes, something will pop. But it won't look like 2001.
During the dot-com crash, your neighbor lost money. Your parents' retirement took a hit. The guy who delivered your pizza had WorldCom stock.
Today's AI bubble is a private affair. The money at risk belongs to venture capitalists and accredited investors. People who can afford to lose nine bets if the tenth pays off. The public markets barely register these companies because most will never get there.
When this bubble deflates, it won't be a crash. It'll be a cleanup.
The companies building fast and cheap without talking to customers? Gone. The ones chasing venture funding instead of product-market fit? Absorbed or abandoned. The tools that exist because they can, not because they should? Forgotten.
But here's what survives: Companies building slowly, thoughtfully, with their customers' actual problems in mind. The boring ones. The ones that take six months to ship a feature because they're making sure it actually works.
They're not trying to be everything to everyone. They're trying to be something specific to someone specific. And that's exactly why they'll still be here when the bubble-talk fades.
The Bottom Line
The MIT study got one thing right: A lot of AI projects are failing. But not because AI doesn't work. It's because we're terrible at technology adoption, and we always have been.
We're comparing the messiness of current reality to the promise of future possibility, forgetting that every transformative technology looked like failure in its awkward adolescence.
The real bubble isn't in AI. It's in our expectations.
And perhaps that's the bubble that needs to pop.