Digging Into MIT’s 95% AI Project Failure Rate – How to Increase AI project ROI

AI project ROI has become the ultimate executive challenge, and MIT's latest NANDA study reveals just how steep that challenge is. After enterprises have collectively spent an estimated $30–40 billion on GenAI initiatives, roughly 95% of those initiatives show no measurable impact on the P&L, according to the study. That's a sobering statistic, though perhaps not entirely surprising to anyone watching this space closely.

This pattern isn't new. The research echoes something I witnessed firsthand about five years ago whilst working with a client's data science team that was struggling to make AI 'work'. The team produced brilliant technical work, but the projects were either too ambitious, required too many stars to align, or lost stakeholder support because they took too long to create impact. They tackled impressive technical challenges but delivered disappointingly modest business returns.

What we discovered then mirrors what MIT found now: the problem isn't necessarily the technology, it's quite possibly the selection and design process. Most organisations are picking fights they can't win, or more precisely, fights that don't matter even when they do win.

The MIT study exposes a telling disconnect. Whilst roughly half of AI investment flows into sales and marketing initiatives – the glamorous, visible stuff that looks good in quarterly presentations – the actual AI project ROI is coming from decidedly unglamorous back-office automation. Document processing, compliance workflows, operational efficiency gains. The sort of work that doesn’t photograph well but quietly saves hundreds of thousands in external spend.

Here’s what really struck me about the MIT findings: many employees are already running their own AI experiments, and most organizations are completely ignoring the results. Nearly 90% of knowledge workers are quietly using AI tools in their daily work – ChatGPT for research, Claude for writing, various apps for specific tasks. They’re developing practical knowledge about what actually saves time and what’s just clever-looking nonsense.

Yet when these same organizations design their formal AI strategy, they start from scratch. They commission consultants, form committees, and invest in enterprise solutions without ever asking their own people what’s already working or what has the largest potential for impact.

When we tackled that data science team’s portfolio challenge – determining which internal projects deserved investment – we built a simple evaluation framework using two dimensions: attractiveness versus likelihood of success. Nothing revolutionary there – consultancies have been using 2×2 matrices since the dawn of PowerPoint. The key was tuning the criteria to reflect what actually drives AI project ROI in that specific organization.

For attractiveness, we focused ruthlessly on measurable impact: hours saved, external costs avoided, regulatory risks mitigated, value generation, frequency of the task. No woolly metrics about “strategic positioning” or “future readiness.”

The likelihood dimension proved more nuanced. The MIT findings suggest the critical factor is integration capability: can the solution actually learn and adapt within existing workflows, and remain relevant over time? Other factors we found to be important included a clear definition of project scope, data requirements and scalability. Interestingly, the MIT study suggests that organisations partnering with external providers see roughly double the deployment success rate compared to purely internal builds. It seems that buying pays off far more than building in-house.

Perhaps most importantly, we discovered that internal stakeholder alignment isn’t just helpful, it’s decisive. The projects that succeeded had genuine line-of-business sponsors who understood both the problem and the solution, not IT leaders trying to find applications for cool technology. Crucially, these sponsors could articulate why a particular project deserved investment over competing priorities – even when it wasn’t their preferred project.
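The two-dimension screen described above can be sketched in code. This is a minimal, illustrative version: the criteria names, the 1–5 rating scale, the weights-free averaging, and the example project are my own assumptions for demonstration, not taken from the article or the MIT study.

```python
# Illustrative 2x2 portfolio screen: attractiveness vs likelihood of success.
# Criteria names and the threshold are hypothetical examples.

ATTRACTIVENESS = ["hours_saved", "external_cost_avoided",
                  "risk_mitigated", "task_frequency"]
LIKELIHOOD = ["workflow_integration", "scope_clarity",
              "data_readiness", "sponsor_strength"]

def score(project: dict, criteria: list) -> float:
    """Average the 1-5 ratings for one dimension."""
    return sum(project[c] for c in criteria) / len(criteria)

def classify(project: dict, threshold: float = 3.0) -> str:
    """Place a project into one of four quadrants."""
    attractive = score(project, ATTRACTIVENESS) >= threshold
    likely = score(project, LIKELIHOOD) >= threshold
    if attractive and likely:
        return "invest"          # high value, likely to land
    if attractive:
        return "de-risk first"   # valuable but fragile
    if likely:
        return "question value"  # feasible but low impact
    return "drop"

# Hypothetical back-office example: contract review automation.
contract_review = {
    "hours_saved": 4, "external_cost_avoided": 5,
    "risk_mitigated": 4, "task_frequency": 5,
    "workflow_integration": 4, "scope_clarity": 4,
    "data_readiness": 3, "sponsor_strength": 5,
}
print(classify(contract_review))  # -> invest
```

The point of keeping the mechanics this simple is deliberate: the value of the exercise lies in tuning the criteria lists to the organisation, not in the scoring arithmetic.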

What emerges from both the MIT research and practical experience is that the organizations crossing from pilot to production aren’t necessarily more innovative or better funded. They’re more disciplined about project selection and more realistic about what drives adoption.

The unsexy truth is that the highest-impact AI initiatives often target the most mundane processes. The legal team drowning in contract reviews, the finance function buried in invoice processing, the operations centre managing supplier documentation. These aren't the use cases that generate conference keynotes, but they're the ones generating actual AI project ROI. And they're often the same problems staff have already started using AI to solve on their own.

At Executive AI Partners, we spend considerable time helping leadership teams identify these opportunities and design them for AI project ROI success. The framework we developed recognises that successful AI initiatives aren’t necessarily exciting projects, but they are reliably valuable ones.

The window for competitive advantage through AI adoption is narrowing rapidly, but it hasn’t closed – perhaps it’s still 95% open. Rather than debating whether to invest in AI, smart leaders are focusing on where to invest first. Start with problems that matter, in workflows that work, with people who care about the outcome.

That 5% success rate isn’t a ceiling – it’s simply a reflection of current selection criteria. Choose better, and you’ll do better.
