Building trustworthy AI-enabled services for the public sector

Now, more than ever, there is a need for a practical and accountable approach to AI that strengthens service delivery without unnecessary complications, writes Nikki Powell, Director of Service Delivery at Capita Public Service

By Capita

09 Mar 2026

Across government, artificial intelligence is frequently positioned as a response to rising demand, workforce pressures and shifting citizen expectations. Senior leaders understand the potential, yet trust remains the defining barrier. Recent public debate, including analysis of large language models in public sector settings, has highlighted a persistent concern: how can government be confident that AI systems, particularly those that generate content, will provide information that is safe, accurate and accountable?

This question goes to the centre of public service legitimacy. Leaders are right to be cautious, although caution does not mean standing still. AI can strengthen services, increase productivity and support better outcomes for citizens, provided it is introduced in ways that reflect the operational and regulatory realities of public service delivery.

Nikki Powell, Director of Service Delivery, Capita Public Service

Rising demand and strained budgets

Demand continues to grow across public services while budgets remain tight. Independent analysis has shown that many departments face rising workloads with fewer staff available to absorb them. This pressure exists while citizens expect responsive digital interactions like those they experience in private sector services. AI is rapidly becoming part of the core fabric of service delivery.

However, the public sector faces real constraints. Data is often fragmented. Systems are ageing. Procurement processes are lengthy. Any model used to answer citizen questions must be dependable and explainable, and must withstand scrutiny from auditors.

Public attitudes underline the challenge. Research from national bodies indicates that almost half of the public are unsure whether AI will be used safely in government services. Concern around potential bias, transparency and error continues to shape expectations. Trust is now a form of currency in AI-enabled public service delivery.

What is really changing and why it matters

The shift underway goes beyond the growth of large language models. It is about how public bodies use AI safely inside existing services with strong governance and human oversight. The debate often focuses on conversational tools, although these are only part of what is needed for trustworthy public service operations.

What matters now is the ability to combine multiple technologies, from machine learning and automation to secure data services and monitoring mechanisms, so that AI becomes dependable and auditable inside live environments. This is why structured enablement approaches are important. An approach such as the Capita AI Catalyst Stack blends different components. These include foundational data controls, workflow integration, safety guardrails, monitoring and human review. This allows organisations to govern, evidence and adapt AI use as operational needs change.

This integrated approach supports principles that are now essential:

  • Trust must be built inside operations. Confidence comes from using AI within services that already have clear ownership, established processes and defined accountability.
  • AI must align with how services work. Embedding capabilities inside existing workflows, controls and quality standards helps ensure consistency, explainability and defensibility.
  • Safe environments matter. Services delivered through business process outsourcing (BPO) provide a practical control environment where AI can be monitored, adjusted and strengthened.
  • Progress must be measured and reversible. Public bodies need approaches that allow them to test, refine or pause AI activity without destabilising services, a point reinforced by recent National Audit Office commentary on emerging technology governance.

By moving beyond a narrow focus on LLMs and combining the full set of capabilities required for safe AI, organisations can make progress that withstands operational, regulatory and political scrutiny.

Embedding AI in public services

The priority is not selecting an AI tool. It is deciding how AI becomes part of the operating model.

Key implications include:

  1. AI must be governed like any core public service function. Every AI-assisted decision must be traceable, challengeable and correctable.
  2. Integration matters more than innovation. Leaders need AI that connects into existing processes, data flows and quality assurance structures.
  3. Human judgement remains central. AI should remove friction but not responsibility. Staff are still accountable for decisions made in the name of the organisation.
  4. Progress must be defensible. Any approach must be able to withstand scrutiny from ministers, auditors and citizens. This requires evidence of incremental improvement rather than ambitious transformation claims.
  5. Partnerships must be practical. Providers must work within the real constraints of public service operations.

Recommended actions for the public sector

To make safe and meaningful progress, leaders should focus on a small number of high-value steps.

  1. Start within controlled environments: Prioritise AI-enabled improvements where processes already have strong governance, such as BPO-delivered services.
  2. Strengthen data foundations: Better-connected and better-governed data increases the value and safety of AI-enabled functions.
  3. Build internal AI literacy: Equip teams to understand how AI is being used, how outputs are validated and how risks are managed.
  4. Adopt reversible approaches: AI deployments should be easy to pause, reverse or adjust without disrupting services.
  5. Choose partners with operational understanding: Safe adoption depends on working with organisations that understand governance, accountability and citizen outcomes inside live operations.

In summary

Artificial intelligence is reshaping how citizens expect to interact with public services. The question for government is not whether AI should be adopted, but how it can be introduced safely, responsibly and in ways that reinforce trust. Senior leaders have a crucial opportunity to shape this shift. By embedding AI within live operations, strengthening accountability and progressing at a pace that services can sustain, the sector can deliver more resilient services, better citizen outcomes and greater confidence in the use of emerging technologies.


If you are rethinking how AI fits into your service strategy, you can find more practical insights that can help leaders move from intent to safe, confident delivery here: Capita Public Service.

