Your AI Isn’t Failing, Your Foundation Is: Why 95% of AI Initiatives Deliver Zero ROI

Somewhere right now, a CFO is sitting in a board meeting trying to explain why the AI initiative that was supposed to transform planning has produced exactly nothing. No faster forecasts. No better scenario models. No measurable return on the seven-figure investment the board approved eighteen months ago. 

They’re not alone. MIT’s Project NANDA studied hundreds of enterprise AI deployments and found that 95% of generative AI pilots delivered no measurable P&L impact. Not underperformed. Not “needs more time.” Zero. And this isn’t a small-sample problem either, because the research covered $30 to $40 billion in global enterprise AI investment. The vast majority of that money evaporated. 

Here’s what makes this uncomfortable: the technology actually works. AI can forecast demand patterns, detect anomalies in financial data, generate scenario models, and surface insights that would take a human analyst weeks to produce. The algorithms were never the problem. The data foundation underneath them was. 

The Pattern Behind Every Failed AI Pilot

Look closely at the AI initiatives that collapse, and you’ll find the same story repeating across industries. 

McDonald’s spent three years testing an AI-powered drive-thru ordering system with IBM. The system misheard customers, added random items to orders, and produced bizarre food combinations that went viral on TikTok. They pulled the plug in 2024, not because the AI was incapable, but because it couldn’t make sense of messy, unstructured real-world inputs that nobody had prepared it for. 

S&P Global data shows that 42% of companies scrapped most of their AI initiatives in 2025, more than double the 17% that did so the year before. RAND Corporation puts the broader AI project failure rate above 80%, which is double the failure rate of non-AI IT projects. And MIT’s research found something particularly telling: more than half of enterprise AI budgets are going to sales and marketing pilots, while the biggest actual ROI sits in back-office automation and operations, the areas where structured data and defined workflows already exist. 

That last point is the key to understanding all of this. AI succeeds where the data is clean, connected, and structured. It fails where those conditions don’t exist. And for most organizations, particularly in finance and planning, the data isn’t even close to ready. 

You’re Asking AI to Build on Quicksand

Think about what a typical enterprise planning environment looks like before AI enters the picture. Finance has its budget models in Excel. Operations runs a separate demand planning tool. HR does workforce projections in yet another system. Sales has its own forecasts that don’t reconcile with anyone else’s numbers. The data sits in silos, defined differently across departments, updated at different intervals, and reconciled manually if it gets reconciled at all. 

Now drop AI into that environment and ask it to generate a rolling forecast. What does it have to work with? Fragmented data with inconsistent definitions, no single source of truth, and no mechanism for feeding human corrections back into the model. The AI doesn’t hallucinate because it’s poorly designed. It hallucinates because you’ve given it garbage to work with and expected gold in return. 

This is exactly what MIT’s research confirms. The organizations that succeed with AI aren’t the ones with the fanciest algorithms. They’re the ones that prepared their data infrastructure first. External vendor partnerships succeed about 67% of the time, while internal builds succeed only 33%, largely because specialized partners force the foundational work that internal teams skip in their rush to deploy. 

What “AI Ready” Actually Looks Like

Getting value from your AI investment isn’t about buying better AI. It’s about making your organization ready to use AI effectively, which means doing the work most companies want to skip. 

Structured, modeled data 

Your planning data needs a common structure. Not just a data warehouse where everything gets dumped, but an actual data model where revenue means the same thing to finance as it does to sales, where cost centers align across departments, and where actuals flow into the same structure as your plans. Without this, even the most sophisticated AI tools will produce outputs nobody trusts. 

All sources consolidated in one place 

When your ERP, CRM, HR system, and operational tools each hold a piece of the planning picture, AI can’t see the full image. Consolidation isn’t a nice-to-have. It’s the prerequisite for any AI initiative that’s expected to deliver cross-functional insight. 

Connected data entry that replaces isolated spreadsheets 

As long as critical planning inputs live in disconnected Excel files on someone’s desktop, no AI model can access, learn from, or improve on that data. The entry point has to be connected to the broader data ecosystem, not floating in isolation. 

A feedback mechanism that lets AI actually learn 

This might be the most overlooked requirement. AI models improve when humans correct them, but most planning environments are one-directional: data flows out for reporting, and nothing flows back. Without write-back capabilities that create a genuine feedback loop, the model never learns from human expertise. It just keeps repeating the same mistakes with increasing confidence. 
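The feedback loop described above can be sketched in a few lines. This is a toy illustration only, not Acterys’s implementation or any vendor’s API: a hypothetical forecaster that starts one-directional, then uses a write-back method so planner corrections adjust its future output.

```python
# Toy sketch of a write-back feedback loop in a planning model.
# All class and method names here are illustrative assumptions.

class NaiveForecaster:
    """Forecasts next period as a bias-adjusted average of recent actuals."""

    def __init__(self):
        self.bias = 0.0          # learned from human corrections
        self.corrections = []    # (forecast, corrected_value) pairs

    def forecast(self, recent_actuals):
        baseline = sum(recent_actuals) / len(recent_actuals)
        return baseline + self.bias

    def write_back(self, forecast, corrected_value):
        """The feedback loop: a planner's correction flows back into the model."""
        self.corrections.append((forecast, corrected_value))
        errors = [corrected - predicted for predicted, corrected in self.corrections]
        self.bias = sum(errors) / len(errors)  # average human adjustment

model = NaiveForecaster()
first = model.forecast([100, 110, 120])  # plain average: 110.0
model.write_back(first, 125)             # planner knows demand is trending up
second = model.forecast([110, 120, 125]) # baseline plus the learned adjustment
```

Without `write_back`, every cycle would repeat the same error; with it, each human correction shifts the next forecast, which is the bidirectional behavior the paragraph above argues most planning environments lack.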

Getting More From What You’ve Already Invested

Here’s the thing that should bother every CIO and CFO reading this: you’ve probably already spent real money on AI. Licenses, pilots, proof-of-concept projects. The question isn’t whether to invest in AI, because that ship has sailed. The question is whether your current infrastructure lets that investment actually pay off. 

MIT’s research found that the biggest ROI from AI comes in back-office operations and finance, not in the flashy customer-facing pilots that eat most of the budget. That’s encouraging news for planning and performance management teams, because the use case with the highest return potential is sitting right in front of you. But only if the data architecture supports it. 

The organizations currently winning with AI in planning share a common trait: they treated data readiness as the first investment, not an afterthought. They consolidated their planning data before asking AI to analyze it. They built connected models before expecting AI to forecast from them. They created feedback loops before assuming AI would get things right on the first try. 

You don’t need to rip and replace everything. But you do need to be honest about whether your current environment gives AI what it needs to succeed. If your planning data lives in disconnected spreadsheets, if your departments define the same metrics differently, if there’s no mechanism for human corrections to flow back into your models, then no amount of additional AI spending will compensate for that gap. 

The companies in MIT’s successful 5% didn’t have better AI. They had better foundations, and that’s a problem with a clear, proven solution. 

Build the Foundation That Makes AI Deliver

Acterys is built to solve exactly the readiness problem that causes most AI investments to fail. It connects your data sources into a unified, structured model. It gives your teams the ability to plan, enter data, and collaborate directly within Power BI and Excel, eliminating the isolated spreadsheets that starve AI of reliable inputs. And its write-back architecture creates the bidirectional feedback loop that allows AI models to learn from your team’s expertise and get smarter with every planning cycle. 

Whether you’re running budgeting, forecasting, consolidation, or full xP&A across the enterprise, Acterys builds the data foundation first so your AI investment actually produces the returns you were promised. 

Request a demo to see how Acterys turns AI readiness into AI results.