Successful adoption of AI in government requires a clear vision and practical steps towards implementation, according to two government representatives and two speakers from large tech firms at Capita’s recent summit.
The launch of ChatGPT in late 2022 democratised innovation, giving anyone the ability to send instructions in natural language and create speech, text, images, and video.
This democratisation is the key opportunity for government, offering entire organisations and citizens the tools to build new things.
However, there needs to be a central, governed platform, otherwise people will find their own way – leading to ‘shadow AI’ that isn’t authorised or is tried and abandoned.
Becoming a ‘frontier firm’
The Government has unveiled a £47bn AI investment plan, which depends on ramping up AI adoption.
One tech speaker described the imperative for government departments to become ‘frontier firms’ – embedding AI in almost everything they do.
This happens in three stages:
- Using software with embedded AI, such as the free-to-use Copilot chat. This is how people will build skills, so the tech speakers recommended giving it to as many people as possible.
- Becoming an ‘agent boss’, using agentic AI that can make decisions, with a human in the loop to action next steps and provide oversight and control.
- Evolving into ‘agentic organisations’, where one person oversees a host of automated agents carrying out basic, repeatable processes.
How things can change
The speakers shared several use cases that could be explored across government.
For example, one government speaker mentioned a chat app that has been trialled to help users interact with 700,000 pages of content more intuitively. Another described a pilot exploring agentic flows in which people can get support finding jobs or educational opportunities that fit around their life and commitments.
A speaker from a large tech firm talked about helping government departments to reimagine process flows from the perspective of customer experience and satisfaction. He described typical calls to a regulator, sometimes about whistleblowing, which can last up to 30 minutes. An AI-generated summary transcript can save an agent 10 minutes after the call – but there was still further potential to transform the service for the citizen.
So how can it be pushed further?
What was taking most of the employees’ time, and affecting the customer experience, was what happened after the call – including retrospective call assurance on quality, and compliance teams deciding whether they needed to follow up.
The speaker posed the possibility of a real-time quality assurance process, with QA staff looking at the live call log and advising the call handler how to change the course of the call, removing the need for a retrospective process that can delay action.
A second tech speaker also encouraged delegates to think about how roles will change with AI. He revealed that his software engineering team are finding that GitHub Copilot consistently writes better, more secure code than developers, so those developers are focusing further up the value chain. As the technology matures, the company is having to think about the whole profession – what does the software engineer of the future look like? They may be prioritising critical thinking, creativity, and how to solve difficult problems using AI.
And adapting to this fundamental role change is a multi-year journey.
Steps to implement the vision
The speakers were asked how government can support staff towards a future-ready vision, and they responded with numerous opportunities – and challenges too.
Firstly, shifting mindsets about the business case for AI – for example, one recommendation was not to think of AI adoption as tech spend, but instead to see it as an investment in ‘digital labour’ that can help people.
Secondly, a people focus is essential – encouraging employees to focus on AI use cases that automate or speed up time-consuming parts of their job, e.g. taking and writing up meeting minutes, or day-to-day correspondence.
And thirdly, a cultural change: for example, enabling radical autonomy by giving a small number of people from different directorates AI with guard rails, and permission to explore how they could transform services. Another shift a tech speaker encouraged was resetting people’s expectations of AI: rather than using it to do current processes faster, people should demand that AI delivers results in days rather than weeks – the outcomes can surprise everyone.
Finally, another change in perception: computer systems soon won’t involve fields to populate and a familiar workflow to follow. Instead, we’re going to ask the computer for what we need. One speaker described AI as a smart but flawed assistant that helps people do their job – one that needs to be asked the right questions and prompted to fact-check its output.
The need for visionary change
AI is something government needs now. Both local government and the NHS need to drive huge efficiencies through digital tech.
Yet the speed of adoption can be constrained. Government needs to embed new mindsets to speed things up, deliver new services faster, and compare them against the right benchmark rather than the status quo.
All the speakers agreed: if you haven’t yet experimented with AI, you need to do so fast and be bold about what’s possible.
You can read the first article in this series here: Understanding the AI opportunity in government
Capita is hosting a series of free-to-attend AI bootcamps facilitating open, honest conversations about your hopes and challenges, and how we could help you.
To register, send an email to bettergovernment@capita.com