The UK is now deep into its third technology wave in government. First came “big IT”. Then digital transformation. Now artificial intelligence.
The rhetoric is familiar. AI will transform public services, raise productivity and free staff for more valuable work. The reality, as the National Audit Office keeps pointing out, is more complicated.
In a 2024 stocktake, the NAO found that around 70% of public bodies are already piloting or planning AI, but that activity is fragmented, early stage and built on weak foundations of data, governance and skills. These weaknesses cannot be addressed by bringing in armies of graduates from the big consulting firms.
The risk is obvious. If we rinse and repeat our current digital playbook, AI will simply hardwire existing complexity into the system while shifting more work onto citizens, rather than building a more productive state able to respond effectively to the needs of service users.
The first three industrial revolutions were about machines replacing human effort. Industry 4.0 has been about digitalisation and automation at scale. Industry 5.0, or the fifth industrial revolution, is framed very differently in the emerging literature. It is explicitly human-centric. Technology is meant to collaborate with people, not just control them. Productivity is still important, but so are inclusion, ethics and human flourishing.
For the state, that should be more than a rebranding exercise. How government drives forward AI is a strategic choice.
If government treats AI as another round of Industry 4.0, we will get faster, more opaque versions of the systems we already have. If it treats AI as part of a fifth industrial revolution, the tests ought to be different. Are we using AI to extend human capability and judgement in public services, or to hollow them out? Can AI ensure that public services are genuinely built around the needs of citizens and service users?
Three tensions will help to determine which way the UK goes.
1. Productivity or human value, or both
The NAO’s growing portfolio of evidence on digital transformation is clear that departments have tended to chase narrow cost efficiencies without addressing the underlying complexity of service pathways and delivery chains. Its 2023 review concludes that eleven digital strategies in 25 years have delivered “little lasting success” because they focused on the front end and left core systems and processes largely untouched.
Industry 4.0 logic would double down on that fundamental error. Use AI to automate contact centres, triage, casework and fraud checks. Cut headcount. Bank savings. Hope the system copes.
A fifth industrial revolution approach would pose a different set of questions. What are the outcomes that matter most to people? Where do human relationships and professional judgement make the greatest difference? How can AI reduce low value work and cognitive load, so that the human dimension of service delivery gets more time and attention?
The NAO has already warned that government rarely measures the total time and cost a system imposes on citizens and businesses. A strategic state would fix that first. It would judge AI not on departmental running costs alone but on whether it reduces the total human effort required to secure a fair tax bill, a benefit entitlement, a court judgment or a diagnosis.
Countries that have done digital well already think this way. Estonia’s e-government platform is estimated to save around 2% of GDP in working time each year, largely because routine interactions that once took hours now take minutes. Digital signatures and three-minute tax returns are not just clever tech – they are deliberate efforts to eliminate wasted human time. That is what a fifth industrial revolution lens looks like in practice.
2. Central control or human-centred place leadership
AI arrives in a state that is still highly centralised. Digital programmes have often reinforced that with centrally specified systems, centrally owned data and centrally driven metrics.
There is a risk that AI becomes another way of pulling power upwards. Centrally owned platforms that score risk based on a Westminster view of the world. Central systems that allocate resources based on averages rather than an accurate assessment of need. Local leaders thus become operators of algorithms they did not help design, constrained by risk frameworks they did not shape, with resources they cannot flex.
Industry 5.0 thinking points in the opposite direction. It emphasises collaboration between technology and human workers, and between central and local institutions, to promote individual well-being. For public services, that means:
- National standards on safety, ethics and transparency
- Shared data infrastructure where it makes sense
- But genuine scope for places to adapt AI to local context and, most importantly, to switch it off when it harms trust or outcomes
The NAO’s own findings on data underline why this matters. Many of the biggest gaps in health, policing and local services are local data gaps. You cannot build useful, fair AI for health, education, social care or neighbourhood policing from Whitehall alone. You need the people who understand streets, families and services in the neighbourhoods and communities where lives are lived.
That is also where trust is earned. The Information Commissioner’s recent concerns about racial bias in police facial recognition technology are a warning that will no doubt be repeated. False positive rates for Black and Asian people many times higher than for white citizens will destroy public confidence if not fixed and governed properly. A fifth industrial revolution state would treat that as a systemic design problem, not a marginal technical issue.
3. Experimentation or stewardship
The NAO has started to talk differently about risk. The Comptroller and Auditor General has publicly argued that the public sector will have to take more managed risk, learn faster from experiments and accept that some AI pilots will fail if we are serious about innovation and productivity.
That is a welcome shift. A strategic state cannot be paralysed by fear of failure. But experimentation without principles of stewardship will simply add another layer of complexity.
Industry 5.0 gives one way of framing those principles:
- Human agency: AI supports decisions; it does not make them unaccountable
- Human dignity: AI should not dehumanise interactions with public services or strip people of the ability to challenge outcomes
- Human capability: Over time, AI should make frontline professionals more capable, not less
The NAO’s 2024 report on the use of AI in government effectively says the same thing in audit language. Government is at an early stage, strategies are being drafted but there is no consistent view yet on where AI is appropriate, how to govern risk, or how to develop skills. That gap needs to close quickly.
What the centre should do now
A strategic state would approach AI as an accelerator for improving government and public service effectiveness. Four priorities stand out:
Define a small number of human-centred AI missions.
Earlier detection of harm in health and social care. Lighter, more accurate compliance for honest taxpayers. Fewer convoluted, wasted steps in access to justice. Clear missions will give shape to investment and gain greater traction with industry.
Put simplification and data quality ahead of AI spend.
No AI tooling should be mooted as a solution for a service that cannot identify, interrogate and derive meaningful insight from its own data or explain its own rules.
Build human plus machine teams, not just tech stacks.
Invest in frontline training and in service and process redesign so that AI removes rote tasks and noise, rather than adding functionality that nobody has time to use. Use operating-model lenses to understand the whole system.
Mandate whole system impact assessments.
AI expertise across government is emergent and fragmented. If the Cabinet Office and HM Treasury are to be given the mandate to sign off AI programmes, they should first build the right expertise in sufficient depth and numbers, and require explicit analysis of the impact on citizen time, local workloads and wider distributional effects. That analysis should be a standard part of business cases.
The alternative is straightforward: AI becomes another way of digitising complexity, wrapping an already stretched state in a thicker layer of software that does not enhance productivity.
The promise of the fifth industrial revolution is different. It is that we can use technology to rebuild a more human-centric state – one that values time, relationships and trust as much as throughput and financial efficiency. Whether AI helps us get there depends entirely on the choices the centre makes now.