The government’s AI Opportunities Action Plan outlined an ambitious strategy to boost economic growth and improve people’s everyday lives through AI.
The plan emphasised improving data capabilities, developing AI talent, reforming regulation, and fostering AI adoption across both public and private sectors.
In a recent update, the government says it has moved from “ambition to delivery” but added that there is still more to do.
That matches what we’ve heard in our conversations with civil servants.
Departments want solutions that will automate and optimise workflows, make processes more efficient and reduce operational costs. And in the spending review, all departments signalled the big role AI will play in helping achieve those goals.
The government estimates that digitisation of services could achieve up to £45bn in savings and productivity benefits annually across the public sector, with 80 per cent of those projected savings coming from process simplification and AI-driven automation of manual tasks.
But there are still barriers to adoption.
Why AI stalls in government
Our recent white paper draws on original research, including interviews with civil servants in a range of roles, and highlights that the main obstacles to AI adoption are cultural and systemic, not technical.
Here's what we found.
The AI skills gap
Despite strategic efforts to drive innovation at the leadership level, many operational teams lack practical knowledge of how to use AI effectively. We identified a gap in understanding around AI frameworks: what they entail, what they should include, and how they can support ethical and effective AI adoption. This lack of clarity hinders teams from progressing beyond experimentation to scaled delivery.
Shaky foundations and bad data
Clean, high-quality data is the bedrock of any successful AI implementation, but many departments don't have it. Information is stuck in legacy systems or rife with manual errors, making it inconsistent and unshareable. Relying on this sort of data means AI is less effective and fuels scepticism when results prove unreliable.
A lack of strategy
Civil servants expressed uncertainty about what AI tools are allowed and how to use them appropriately. The lack of a clear policy and strategic direction creates a risk-averse environment, making teams hesitant to proceed. This indicates a need for clearly defined boundaries and a purpose-driven deployment of AI.
The culture issue
The biggest barrier to AI adoption is how people work. In sectors such as defence and national security, decades-old systems and workflows are combined with fixed staffing models and tight security requirements. Deploying AI in this environment requires cultural change and challenging long-standing habits and norms.
Not enough resources
A recurring theme was the need to evaluate organisational readiness, particularly in relation to resourcing, expertise, and internal capability. Interviewees highlighted a significant gap in specialist skills and a lack of planned resource allocation for AI initiatives.
Safety and trust
A critical user need across all interviews was confidence in how data is handled and protected. Civil servants require assurance that AI systems are secure, compliant, and ethical, especially when dealing with sensitive or personal data.
Seven principles for AI integration
From our work with government organisations, we’ve identified a seven-principle framework and an AI readiness assessment that address the concerns departments have about integrating AI successfully.
We highlight why it’s important to align stakeholders, identify user needs, lay a solid data foundation, find targeted use cases, ensure robust assurance and ethics, and build a strategic roadmap to scale AI capabilities.
We give an example of how our approach has already delivered tangible results for Border Force. In just 12 weeks, we built a proof-of-concept that demonstrated how AI-powered vision models can detect contraband or threats in X-ray images.
We’re now working with Border Force to shape the implementation roadmap and guide the organisation through the governance, security, and ethical considerations necessary for operational rollout.
The government’s AI Opportunities Action Plan sets a clear vision. But as our research shows, turning that ambition into real-world results requires addressing the barriers civil servants face and feel.
What teams need most is confidence. Confidence that comes from following proven approaches. Confidence that comes from tapping into the right expertise. And confidence that comes from having a clear roadmap.
With these in place, teams can take AI initiatives from pilots to full solutions that deliver organisational outcomes and value for citizens.
Our human-centred approach builds that confidence, trust and skills while laying the foundations for AI solutions that are secure, legal, ethical and scalable.
Read our white paper, An AI readiness roadmap for decision makers, to find out more.