Lord Evans: AI can be game-changing for public services – but reassurance needed on how it will be used

The UK’s regulatory framework for AI in the public sector is "still a work in progress", the Committee on Standards in Public Life chair says


Lord Evans. Photo: Committee on Standards in Public Life

By Lord Jonathan Evans

10 Feb 2020

Today, the Committee on Standards in Public Life is publishing its report on artificial intelligence and public standards.

Over the last year, we have spoken to tech specialists, policy professionals, academics, legal experts and private companies to examine how those working in the public sector can continue to demonstrate high ethical standards when working with artificial intelligence.

How does this new technology fit with the Nolan principles of honesty, integrity, objectivity, selflessness, leadership, accountability and openness? What reforms in AI policy, governance and regulation are needed to ensure standards will be upheld as public services try to make the most of this new and exciting technology?


AI is game-changing technology. Developments in machine learning promise to transform the way decisions are made right across the civil service. But any change in how public services are delivered must not undermine the values that underpin public life. It is clear that the public needs reassurance about the way AI will be used.

As AI becomes more widely used across the public sector, the risks to the widely-accepted principles of openness, accountability and objectivity are obvious.

On openness, we found that the government was failing. The public sector is not sufficiently transparent about its use of AI and we found it very difficult to establish where and how new technology is currently in use. Transparency is a critical first step for building public trust and confidence in this technology and we are calling for more proactive disclosure of the use of algorithmic systems.

"On openness, we found that the government was failing. The public sector is not sufficiently transparent about its use of AI and we found it very difficult to find out where and how new technology is currently in use"

Explaining AI decisions will be key to accountability and many have warned of the dangers of ‘Black Box’ AI. But our review found that more explainable AI is a realistic and attainable goal for the public sector – so long as government and companies delivering services to the public prioritise public standards when designing and building AI systems.

Data bias risks automating discrimination and undermining the important public service principle of objectivity. Our committee found this a cause for serious concern. Technical “solutions” to bias do not yet exist, and we do not know how our anti-discrimination laws will apply to data-driven systems. Civil servants using this technology must be alive to the ways in which their software might affect different communities differently – and act to minimise any discriminatory effect.

All in all, the UK’s regulatory framework for AI in the public sector is still a work in progress and there are notable deficiencies. The committee makes a number of recommendations to push government and regulators in the right direction.

We do not believe that a new AI super regulator would solve these problems. Instead, we argue that it is incumbent on every organisation using AI to step up.

We agree that the new Centre for Data Ethics and Innovation should act as a central advisory body for government and regulators in this area. The recent guidance published by the Office for AI and the Government Digital Service is also a significant step forward in AI governance.

The GDPR – supported by excellent recent guidance from the ICO – provides safeguards around explanations and upholds the role of public officials in decision-making processes. The Equality Act, including the Public Sector Equality Duty, provides a strong legal foundation to prevent discrimination via algorithm.

Our review showed that standards issues around the deployment of AI should be analysed within the context of general risk management – something the civil service is well versed in – and any risks to standards considered and actively mitigated.

The governance measures we recommend should not come as a surprise to those working in this sector – they include maximising workforce diversity, setting clear responsibilities for the officials involved in an AI decision-making process, and establishing oversight of the AI process as a whole.

We also recommend that government makes greater use of its market power and demands more from the tech companies that want to develop and supply public sector systems. Those firms need to know the ethical standards expected so that they can design them in right from the start.

AI offers exciting new opportunities for public services – from pothole detection and traffic light optimisation to personalised care. My hope is that the recommendations in our report will allow the public sector to innovate using AI, fully confident that the standards the public expect are being met.
