AI HYPE CYCLE

Behind the Headlines of the MIT Study

Published: 2025-08-27

The response to MIT’s study showing "95% of Enterprise AI projects are failing" has been one of gleeful schadenfreude for people like me who love soapboxing about how most technology headlines are fake, fraudulent, or grossly exaggerated.

Maybe we missed out on the early NVIDIA/PLTR buy, or are jealous of our friends who went to work there in 2018, and so a little headline course-correction soothes those wounds of regret.

And while I’d prefer to join the collective flagellation against hype, the study’s focus on "custom enterprise AI build-outs" showed me that the biggest failure it exposes might be a lack of understanding of just how low the survival odds of enterprise IT projects are at large organizations in general, AI or not.

In 1994, for instance, surrounded by the buzz of the early intranet (and internet) and the dawn of client-server migrations, a measly 9% of ALL large enterprise IT projects were deemed successful when judged by similar metrics.

Three decades later? In 2020, that 91% failure rate dropped to a healthy…90% for large enterprises.

Was the hype of "groupware" in 1994 or the IoT implementations of 2020 partly to blame for some of that failure? Sure. But less because of the nature of the new technology itself and more because of how we tend to behave in its presence.

The constant across adopting any new technology in an enterprise is the hazardous terrain of translating real, solvable business problems into specs, PRDs, and process maps, and then further into solutions with clear roll-out plans, end-user behavior change, and adoption.

Simply put, any IT project of significance at ANY enterprise of decent scale is far more fraught than we like to think. (Which is why the few software vendors that do repeatably navigate the above pitfalls with productized solutions and a predictable experience are able to propagate so quickly.)

In 2025, matters are made worse when these hazards are combined with the reality of dealing with non-deterministic language models, exaggerated claims about those models’ performance, and top-down mandates insisting that you should be able to "vibe-code" these projects to success with half the IT resourcing of before. The result, naturally, is a success rate five points lower.

Or, my smooth-brain explanation: the real new hazard is people writing all those PRDs and specs with ChatGPT, turning their brains off and spreading gobbledygook and muddy thinking throughout their organizations and onto doomed "Enterprise AI projects."

But to get back on my soapbox for a minute…

The report did find that vendor-led partnerships were a third more likely to deploy than internally built tools. What’s really holding these "AI" projects back is that the tools don’t learn or integrate well into existing workflows. But when they are actually targeted at specific processes and measured against specific outcomes, they can deliver real value.