January 31, 2026 · Poyan Karimi

The AI Consulting Industry Is Broken — Here's Why

TL;DR

The AI consulting industry has a delivery problem. It's very good at producing strategies, frameworks, maturity assessments, and roadmaps. It's much less good at producing organizations where people actually work differently because of AI. The gap between what gets sold and what gets delivered is large — and it's structural, not accidental. This article explains why most AI consulting engagements don't produce lasting change, what the incentive failures are, and what a different model looks like.

What's Wrong with the AI Consulting Industry?

The AI consulting industry is optimized for selling engagements, not for producing adoption. Those two things are not the same — and the gap between them is where most AI investment disappears.

The standard AI consulting engagement looks like this: a discovery phase, a maturity assessment, a strategy document, a roadmap, a series of workshops, a final presentation. The deliverables are tangible — documents, frameworks, slide decks. The client can point to them. The consultant can invoice for them.

What's harder to point to: whether anyone in the client organization works differently as a result. Whether the sales team is saving time on proposals. Whether the operations team has automated any of their reporting. Whether, six months after the engagement ended, AI is part of how work gets done — or a section of a strategy document that nobody opens.

The honest assessment of most AI consulting engagements: the strategy is solid. The adoption is near zero.

Why Does This Happen?

Three structural problems explain why AI consulting consistently produces deliverables rather than results.

Problem 1: Consulting is paid for outputs, not outcomes.

Consulting firms bill for time and deliverables. A 12-week engagement produces a strategy document and gets invoiced accordingly. Whether that strategy produces any change in the client organization is not part of the contract — and often not measurable within the engagement timeline.

This isn't unique to AI consulting. It's a structural feature of professional services. But it's particularly damaging in AI adoption work, where the deliverable (strategy) and the outcome (behavior change) are far apart. A company can have an excellent AI strategy and zero AI adoption. The consultant gets paid either way.

Problem 2: The people who design the engagement aren't the people who need to change.

AI strategy consultants typically work with senior leadership — the people with authority to approve the engagement and sign the invoices. The strategy is designed at that level, for that audience.

The people who actually need to change their behavior are the 30, 50, or 200 employees doing daily work. They weren't in the room when the strategy was designed. They experience it as a mandate from above. The implementation plan assumes they'll comply because leadership said so.

They often don't. Not because they're resistant to change, but because a strategy designed without them, for a problem they weren't asked about, in language that doesn't connect to their daily work, doesn't give them what they need to actually change.

Problem 3: Strategy is easier to sell than behavior change.

Producing a strategy document is a well-defined, manageable consulting activity. It has a clear scope, a deliverable, and an endpoint. Producing genuine behavior change in an organization is messier, slower, more dependent on things outside the consultant's control, and harder to package into a proposal.

So consultants sell what they can reliably deliver: strategy. And clients buy what sounds most responsible: a comprehensive plan before doing anything. Both sides are acting rationally within their incentives. The result is a lot of excellent strategy and very little change.

What Does This Look Like in Practice?

A typical scenario: a company spends £50,000–£200,000 on an AI strategy engagement. Twelve weeks later, they have a detailed roadmap. Twelve months later, they're trying to figure out why nothing has changed.

The roadmap recommended a phased approach: pilot with one team, measure results, expand. The pilot team was identified, an AI tool was selected, a kickoff meeting was held. Then the consulting engagement ended. The internal champion who was supposed to drive implementation had three other priorities. The pilot team used the tool occasionally for a month, then reverted to old habits. The expansion phase never happened.

This is not a failure of the strategy. It's a failure of the model — one that treats strategy and implementation as separable, and assumes that good strategy produces its own implementation.

It doesn't. Especially not for AI, where the barrier isn't understanding what to do but changing how 50 people actually work every day.

What Does a Model That Actually Works Look Like?

An AI deployment model that produces lasting adoption has three characteristics: it starts with doing rather than planning, it reaches every employee rather than just leadership, and it includes support through the adoption curve rather than ending at delivery.

Start with doing.

The most useful thing you can do in week one of an AI initiative is get people building. Not planning to build. Not designing the architecture for building. Building.

A team that has spent one day building real AI tools for their actual roles has more useful information — about what works, what doesn't, where the friction is, what generates the most excitement — than a strategy document produced after twelve weeks of interviews and analysis.

Strategy follows doing. You learn what your organization's AI strategy should be by watching your organization use AI. The organizations that get this right invert the traditional consulting sequence.

Reach every employee.

AI adoption is not a leadership project. It's a behavior change project for the entire organization. The strategy meetings, the governance frameworks, the steering committees — these are supporting infrastructure. They don't change how a recruiter screens candidates or how a sales rep writes proposals.

What changes those behaviors is a session where the recruiter and the sales rep build tools for their specific roles, with someone who knows what good looks like for those roles. That requires going much deeper into the organization than most consulting engagements are designed for.

Include support through the adoption curve.

The adoption curve for any new behavior has a predictable shape: initial enthusiasm, followed by friction as the novelty fades and old habits reassert themselves, followed by either genuine habit formation or abandonment. The critical period is weeks two through six. That's when most people decide whether to continue or revert.

A consulting model that ends at the strategy delivery — or even at the initial training — leaves clients unsupported precisely when support matters most. The model that works is one where the consultant is present through the adoption curve, not just at the strategy phase.

Why We Built Deployed Differently

We built Deployed because we experienced the broken model ourselves — and because when we went looking for an alternative, we couldn't find one.

Before working with external clients, we deployed AI across three companies we own: a recruitment and employer branding agency, a SaaS company, and a tech retail group. We tried the strategy-first approach. We ran general training sessions. We watched adoption die at the 30-day mark.

What worked was different: starting with building, not planning. Going role-specific, not general. Staying present through the adoption curve, not disappearing after the workshop.

That's the model we run for clients. Not a strategy document. A hands-on session where every employee builds something for their role, followed by structured support until AI use is genuinely habitual.

It's a harder model to package into a proposal. It's also the only model we've found that actually produces what clients are paying for: an organization that works differently.

The Deployed Kickstart starts with building, not strategy. The Partner program keeps us present through the adoption curve.

FAQ

Why does most AI consulting fail to produce results?

Because AI consulting is typically optimized for deliverables — strategies, frameworks, roadmaps — rather than behavior change. The incentive structure of professional services rewards producing documents, not changing how employees work. The gap between a good AI strategy and actual AI adoption is large, and most consulting engagements don't bridge it.

What's wrong with AI strategy consulting?

Three structural problems: consultants are paid for deliverables, not outcomes; the engagement is designed at leadership level rather than for the employees who need to change; and strategy is easier to sell and deliver than behavior change. The result is excellent strategies and minimal adoption.

What does effective AI consulting look like?

Effective AI deployment starts with doing rather than planning, reaches every employee rather than just leadership, and includes support through the adoption curve rather than ending at the strategy delivery. The benchmark is whether employees work differently after the engagement — not whether a strategy document exists.

How much do companies spend on AI consulting without seeing results?

Most mid-size AI strategy engagements run £50,000–£200,000. The proportion of that investment that produces lasting behavior change — measurable in how employees actually work — is typically very small. The spend is real. The adoption is often near zero.

Is AI strategy consulting ever worthwhile?

Yes — in specific contexts. For large organizations making significant infrastructure investments, strategic clarity before committing is valuable. For companies deciding whether and how to integrate AI into their product, strategic work is necessary. The failure mode is applying a strategic consulting model to a behavior change problem. Those require different approaches.

What should you ask an AI consultant before hiring them?

Ask: what will every employee have built by the end of the engagement? What does your support structure look like in the 60 days after the initial sessions? How do you measure behavior change rather than training completion? If they can't answer these specifically, the engagement is likely to produce a strategy document rather than adoption.