
AI readiness for enterprise: why initiatives stall and what the foundations need

Adam Shallcross • Founder and CEO

from Cogworks • 29 Apr 2026 • 1 min read

Most people talking about enterprise AI are either deeply technical or deeply commercial. Adam Shallcross is both. As CEO and Founder of Cogworks and someone who has spent the last 18 months building production AI agents, he understands what the technology can do and what it can cost organisations.

Something has shifted, and the organisations making progress spotted it early

A year ago, the conversation I was having with technology leaders was "should we be investing in AI?" That question has gone. The investment has been made. The roadmap exists. In a growing number of organisations, AI has moved off the IT agenda entirely and landed on the CEO's desk, or the Chief of Staff's. Chief AI Officer roles that didn't exist eighteen months ago are now being filled. The board is asking for specifics.

The question now is different: are the foundations actually in place to deliver what's been committed to?

And the organisations I see making real progress on AI aren't the ones moving fastest. They're the ones who were honest about where they actually were before they committed to where they wanted to go.

I've spent the last 18 months building production AI systems

Not proofs of concept, actual systems doing real work.

What I keep seeing, in my own builds and in the organisations I work with, is that AI readiness comes down to three things. 


Adam recently wrote about building a production system of 7 AI agents; some of that experience shapes the thinking in this article.

Why enterprise AI initiatives stall, and what the data actually says

When an AI initiative stalls, the diagnosis usually offered is one of three things: the wrong technology, the wrong vendor, or not enough budget.

In my experience, it's almost never any of those.

McKinsey's 2025 AI research found that organisations reporting significant financial returns from AI are twice as likely to have redesigned their workflows before selecting the technology. Not twice as likely to have picked a better model. Twice as likely to have figured out how the work should actually flow before they started plugging AI into it.

I've seen this pattern across 20 years of technology waves. CRMs that failed because nobody redesigned the sales process first. Platforms that editors hated because the content workflow was never thought through. Automation tools that made things faster and made the mistakes faster too.

The pattern is always the same. Someone gets excited about a technology, a budget gets allocated, a tool gets implemented, and six months later everyone's wondering why it didn't deliver what the demo promised. Not because the tech doesn't work. Because the foundations weren't assessed before the commitment was made.

AI is following the same pattern. The difference is that the expectations are bigger, and the costs of getting it wrong are higher.

The three things that decide whether an AI initiative actually delivers

1. Strategy clarity

Not "do you have a roadmap?" but is the specific problem you're solving defined precisely enough for a model to address it reliably?

Two things sit under this in practice.

The first is whether the business case is clear, the use case is specific, and the expected outcome is actually measurable. I've watched organisations move to implementation with a problem statement so broad that no AI system could meaningfully help with it. The roadmap existed. The problem didn't have edges.

The second is whether you understand how this initiative sits within your competitive landscape, your regulatory environment, and your customers' actual expectations. Gallup research found that only 15% of employees say their organisation has communicated a clear AI strategy. That means 85% of teams are building on assumptions rather than a defined direction.

The question worth sitting with: are we solving a named problem, or hoping AI surfaces one?
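One way to make that concrete (a minimal sketch, not a formal template; every field name and value below is illustrative) is to force the use case into a structure where the measurable parts have to be written down. If the metric, baseline, or target can't be filled in, the problem doesn't yet have edges.

```python
from dataclasses import dataclass

# Illustrative sketch of a problem statement with edges. If any of these
# fields is hard to fill in, the use case probably isn't specific enough
# yet for an AI initiative. All names and values here are hypothetical.

@dataclass
class UseCaseSpec:
    problem: str      # the named problem, in one sentence
    owner: str        # who is accountable for the outcome
    metric: str       # how success will be measured
    baseline: float   # where that metric sits today
    target: float     # the value that would justify the investment
    review_date: str  # when the target will be checked

spec = UseCaseSpec(
    problem="First response to support tickets is too slow.",
    owner="Head of Customer Support",
    metric="Median first-response time (hours)",
    baseline=9.0,
    target=2.0,
    review_date="2026-Q4",
)
```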


2. Data maturity

The question isn't whether you have data. Every organisation has data.

The question is whether it's structured, accessible, governed, and consistent enough across your systems and business units for a model to reliably work with it.

In my experience building agent systems, data is where the gap almost always lives. And it's the least glamorous problem to surface in a board presentation, which is exactly why it keeps getting pushed to later.

KPMG research identifies data quality, accessibility, and governance as the number one blocker to AI success. Gartner predicts that through 2026, organisations will abandon 60% of AI projects unsupported by AI-ready data.

Later, in other words, is always more expensive.

The question worth sitting with: is our data structured and accessible enough to feed a model reliably, not in one business unit, but across all the systems that would need to feed this initiative?
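As a way of grounding that question, here is a minimal sketch (assuming each system can export records as simple dictionaries; the field names, sources, and sample data are all illustrative) of a per-source completeness check. Running the same check against every system that would feed the initiative shows quickly whether one weak source undermines the whole pipeline.

```python
from typing import Iterable

# Illustrative per-source data readiness check. The required fields and
# the sample records are hypothetical stand-ins for your own systems.

REQUIRED_FIELDS = ["customer_id", "created_at", "status"]

def readiness_score(records: Iterable[dict], required: list[str]) -> float:
    """Fraction of records carrying a non-empty value for every required field."""
    records = list(records)
    if not records:
        return 0.0
    complete = sum(
        1 for record in records
        if all(record.get(field) not in (None, "") for field in required)
    )
    return complete / len(records)

crm_export = [
    {"customer_id": "C-101", "created_at": "2026-01-04", "status": "active"},
    {"customer_id": "C-102", "created_at": "", "status": "active"},  # missing date
]
print("crm:", readiness_score(crm_export, REQUIRED_FIELDS))  # 0.5
```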


3. Delivery capability

This is different from engineering capability, and the distinction matters.

Delivery capability is whether your team has the specific skills, processes, and bandwidth to build, test, iterate, and maintain AI systems over time. Not just to launch them. AI systems require a different feedback loop than traditional software. The failure modes are different. The monitoring requirements are different.

Governance is part of this, and it's the part most organisations are leaving until later. Deloitte's State of AI in the Enterprise 2026 report found that 74% of organisations expect to be using agentic AI within the next two years, but only 21% have a mature governance model for autonomous agents. That is not a technology gap. It is a readiness gap.

AI systems also don't fail like traditional software. They drift. The Cloud Security Alliance calls it cognitive degradation: models get updated, prompts change, dependencies shift, and collectively the system's behaviour moves in ways that standard monitoring completely misses. I've experienced this firsthand. Something works well, you leave it running, and a few weeks later, the outputs are subtly different. Not wrong enough to trigger an alert, but not right either.
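To make the drift point concrete, here is a minimal sketch of one way to catch it (not a description of any particular monitoring product; run_agent, score_output, and the threshold are all illustrative): replay a fixed set of golden prompts on a schedule, score the outputs, and compare against a baseline recorded at launch, so a slow shift surfaces even when no single output looks wrong enough to alert on.

```python
import statistics
from typing import Callable

# Illustrative drift check for an AI system: replay fixed "golden" prompts,
# score the live outputs, and compare against a baseline recorded at launch.
# run_agent and score_output are hypothetical stand-ins for your own system
# and evaluation logic.

GOLDEN_PROMPTS = [
    "Summarise last month's support tickets by category.",
    "Draft a renewal reminder for an enterprise customer.",
]

def detect_drift(
    run_agent: Callable[[str], str],
    score_output: Callable[[str, str], float],
    baseline_mean: float,
    tolerance: float = 0.05,
) -> tuple[float, bool]:
    """Return the current mean score and whether it has moved
    more than `tolerance` away from the recorded baseline."""
    scores = [score_output(prompt, run_agent(prompt)) for prompt in GOLDEN_PROMPTS]
    current_mean = statistics.mean(scores)
    return current_mean, abs(current_mean - baseline_mean) > tolerance

# Record baseline_mean when the system goes live, then run detect_drift on
# a schedule (daily or weekly) and alert when the second value is True.
```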

The question worth sitting with: does our delivery team have what they need, not just to build and launch, but to monitor, iterate, and maintain an AI system over time, including the governance structures that agentic AI specifically requires?

What to do with this

Most organisations are genuinely strong in one of these three areas. The gaps in the other two are where the expensive lessons happen.

A proper AI readiness assessment, done before the delivery clock is running rather than during it, is the thing that separates the AI initiatives that deliver from the ones that get quietly shelved after the pilot.

Before answering the board's delivery question, it's worth being honest about where you actually stand across all three. If you want a structured way to do that assessment, Cogworks has built a five-minute diagnostic that maps exactly to these three areas and gives you a clear picture of where your biggest gap lies. Take the AI Readiness Diagnostic →

Alternatively, my diary is open for a chat on where to go next: Book a chat

This article contains links to Cogworks pages, products or services

For every article containing links back to Cogworks’ website, we donate £50 to our amazing partner, Community Tech Aid, and any successful partnership will result in another £100 donation to help fund purchases to repair and refurbish donated technology.


Innerworks and Cogworks are proud to partner with Community TechAid who aim to enable sustainable access to technology and skills needed to ensure digital inclusion for all. Any support you can give is hugely appreciated.