Delivery – not adoption – is the key to government unlocking value from AI

AI offers a way for civil servants to deliver more, faster, cheaper, and with greater precision. But, as Becky Noble, Public Services AI Lead, and Rory Scott, Public Services Head of AI Engineering at PA Consulting, explain, success depends on how teams deliver for scale and value

AI is already saving civil servants nearly two working weeks each year. But time saved on individual tasks is not the same as value delivered at scale, and the government remains a long way short of its ambition to unlock over £45bn per year through full digitisation, productivity and service improvements.

Too often, AI is treated as an adoption challenge. The harder task is turning new capability into safer, faster, better delivery, and doing so in a way that stands up to scrutiny. That is not a model or tooling problem, but a delivery one, where we must re-imagine the way government services are delivered in an AI world.  

Based on our work implementing AI in complex public-sector settings, three changes matter most.

1. Elevate AI ownership

If AI is owned as a technical experiment, it will stay a technical experiment. To deliver value, it needs to be owned by the people accountable for performance – SROs, Permanent Secretaries, and senior operational leaders – with the same discipline you would apply to a major service or programme.

This is more than ‘digital leadership’ in the abstract. It is about real decision rights. AI changes workflows, roles, and controls. It introduces trade-offs between speed, accuracy, cost, fairness, and security. Those choices cannot sit solely with IT. The AI Playbook for the UK Government is explicit here: it advises departments to define responsibilities and liability across the AI lifecycle, and to nominate a Senior Responsible Owner accountable for the use of AI in a specific project.

Many organisations begin with a long list of AI use cases and end up diffusing talent, data access, and governance efforts. Senior ownership helps drive prioritisation: picking one or two high-value decisions or processes, redesigning the workflow, and proving impact in live conditions.

With sustained demand and backlogs, HM Courts and Tribunals Service (HMCTS) recognised the need to identify, test, and scale innovative technologies, including AI, to continue delivering high-quality public services. What mattered most was leadership sponsorship: creating a clear mandate for where AI could be used; setting boundaries on responsible use and judicial decision‑making; and making decisions quickly when teams needed clarity on risk, data access, and controls.

2. Seek responsive assurance

Many government delivery models still rely on staged assurance: gateway-style reviews, static artefacts, and decision cycles based on stability. That approach is at odds with AI-enabled services, where inputs change, prompts and tools evolve, and user behaviour adapts.

AI also exposes a scale challenge. Even where individual teams or pilots are more efficient, the end-to-end system often isn’t. Hand-offs between policy, operations, digital, security, and governance create friction: decisions get revisited, evidence gets reworked, and the same data gets inspected repeatedly through different forums. When delivery speeds up, that drag becomes more visible and more costly, and can quickly erase the gains AI creates at task level.

The Infrastructure and Projects Authority’s guidance on assuring agile delivery, which puts emphasis on observation, engagement, and indicators of success rather than document-heavy reporting, serves as a template for adapting assurance to fast-moving delivery. AI needs a similar evolution towards continuous, responsive, evidence-led assurance.

What does this look like in practice? Tracking outcomes and risks in live operation, not only at sign-off; setting proportionate levels of assurance based on the risk of a particular use case (e.g. a lower threshold for an AI chatbot over internal staff guidance documents than for a service supporting vulnerable users); and keeping transparent records that make it clear where AI is used, why, and with what safeguards.

Responsive assurance does not remove human accountability. It strengthens it by giving leaders timely evidence about what is happening, where controls are working, and where intervention is needed. It also makes it easier to earn trust from frontline users, audit committees, regulators, and the public.

3. Run AI as an operational capability

Assurance is about confidence and control during development: making risk visible, setting the right level of scrutiny, and keeping decision-makers informed. Running AI, meanwhile, is about keeping performance on track over time.  

AI shifts delivery from deterministic, predictable programmes with a defined start and end to ongoing opportunities to iterate and improve. That means moving away from thinking of AI as a programme to be completed, and towards a mindset of continuous improvement – a ‘perpetual beta’ approach.

In government terms, this means recognising that AI‑enabled services, like frontline operations, require ongoing oversight and optimisation to remain effective, safe, and aligned with policy intent. In practice, that involves setting clear objectives for what each AI-powered solution is meant to achieve; measuring its performance in live operation; checking that it remains aligned with organisational goals, values, and risk appetite; and acting decisively when it does not. That action might include retraining or tuning models, redesigning workflows, tightening controls, or withdrawing the solution from use. The point is not to chase innovation; it is to treat AI as a real operational component, with ownership, maintenance, and intervention as part of normal service management.

Running AI in this way doesn’t just improve outward impact; it strengthens internal capability too: identifying skills gaps, redesigning roles, shaping learning pathways, and linking capability investment to measurable outcomes.

Converting adoption into outcomes

Government has already shown that AI can save time in day-to-day work. The question now is whether departments can optimise delivery to gain greater value from AI.

That requires three deliberate shifts: put AI under senior ownership in the delivery line; move from staged oversight to responsive assurance; and build the delivery muscle to run AI as an operational capability, converting capability into value. Done well, AI can then boost delivery confidence, quality, safety, and citizen experience.
