Government is moving beyond simply experimenting with artificial intelligence and starting to use it in everyday work. There’s a growing recognition that AI can improve productivity and public services.
You’re now seeing AI built into how government operates – whether through tailored systems, ready-made tools, or everyday software like Copilot. In short, AI is becoming part of normal business.
But there’s a gap to close.
As AI use increases, organisations are running into some real challenges.
For example, using AI with personal data raises the bar for data quality and security. It can also add complexity and cost. And while AI promises efficiency, that gain isn’t always straightforward – it can simply shift work elsewhere rather than reducing it.
There are wider knock-on effects too. Using AI externally can drive up demand, increase fraud risk, and create new cyber threats. So organisations need to think beyond individual projects and consider the bigger impact.
At the heart of it all is a simple point: AI only works well if the foundations are strong. And many organisations are still dealing with outdated systems and fragmented, low-quality data.
The National Audit Office has developed guidance to help audit and risk committees ask the right questions – focusing on where risks might arise and whether organisations are ready to manage them.
What should audit committees be asking?
To get value from AI and manage the risks, there are several key areas to focus on.
1. Innovation
Start with the basics: why are you using AI?
Is it about exploring new ideas, or solving a clear business problem? Is there a real opportunity that fits with your organisation’s priorities? And importantly, do leaders understand AI well enough to judge whether it’s worth the investment?
2. Strategy
Many organisations are still working this out.
Is there a clear strategic use aligned with organisational priorities? Does it distinguish between innovation and operational use? Is there proper coordination and oversight, rather than disconnected initiatives?
3. Leadership and skills
Senior leadership capability and skills are critical. This is a common pressure point.
Do senior leaders understand AI well enough to ask the right questions, balancing innovation and risk? Is it clear who is accountable for using AI safely and ethically? And is there a plan to build and keep the skills you need?
4. Data
AI is only as good as the data behind it.
Do you know where your data comes from, how good it is, and how it’s stored? Is it actually suitable for the task? And have legal requirements been properly considered before using it? Are data limitations recognised before AI deployment?
5. Security
AI can increase your exposure to security risks – especially when interacting with legacy systems.
Are systems being designed with security built in from the start? Have you assessed AI-specific threats? And are risks from cloud services and third parties properly managed?
6. Pilots
Pilots are useful — but only if they’re properly controlled and evaluated.
Do you have a clear view of what AI is being tested across the organisation? Are risks like bias, data misuse, and security being managed? And do you have clear criteria for when to stop, scale up, or redesign a pilot?
7. Scaling
Moving from pilot to full rollout is often where things get harder.
Complexity and risk increase significantly at scale. Are your ambitions realistic, given your systems and data? Are you managing risks at an organisational level, not just in pilots? And do you have a solid plan for testing and integrating AI into existing processes?
8. Guardrails and guidelines
Clear safeguards are essential to manage ethical, legal and operational risks.
Have you set principles for ethical use, acceptable risk, and responsible behaviour? Are there controls to prevent harm, bias, or misuse? And are you checking that legal and data protection requirements are being met?
9. Workforce and culture
AI doesn’t just change tools — it changes how organisations work.
Are you thinking about how roles might evolve over time? Do you have a credible plan to upskill staff? And are you considering the risks to early career development and long-term capability, such as over-reliance on automation or the loss of important organisational knowledge?
Yvonne Gallagher is the director of the National Audit Office's digital insights team