By Sam Trendall

18 Oct 2019

Artificial intelligence could help solve the public sector’s most complex problems, but how can policymakers navigate the legal, ethical and technical challenges it presents? Sam Trendall talks to experts from the Alan Turing Institute about their programme to help government get to grips with AI


In recent years, automation and artificial intelligence have been increasingly under the spotlight. Sceptics raise concerns about everything from increased surveillance to unintended bias in decision making, while enthusiasts speak of unseen opportunities and transformed productivity. Government is chief among those turning their attention to AI.

The AI Sector Deal last year – which included £406m of government backing for the recruitment of an additional 8,000 computer science teachers and the creation of a National Centre for Computing Education – is one of a number of big-ticket announcements made in the last 18 months.

Among them was the creation of the Office for Artificial Intelligence, a new government entity jointly run by the Department for Business, Energy and Industrial Strategy and the Department for Digital, Culture, Media and Sport.


Another new organisation under the watch of DCMS is the Centre for Data Ethics and Innovation, which has a remit to “develop the right governance regime for data-driven technologies” – chiefly AI. Its first two areas of focus are targeting and bias; the former relates to the growing influence of highly personalised and directed online adverts based on individuals’ data, and the latter to the risk that using datasets weighted towards one group or characteristic will embed and industrialise existing human and systemic biases (see box).

In June, the Office for AI and the Government Digital Service jointly published the Guide to using artificial intelligence in the public sector.

The guide – which looks at how to assess, plan and manage the use of AI, and how to do so ethically and safely, before citing existing government examples – was created in partnership with the Alan Turing Institute: a network of 13 universities that together form the UK’s national academic institution for data science and AI.

The Turing Institute provided the section of the guide dedicated to ethics and safety. Dr Cosmina Dorobantu, deputy director of the institute’s public policy programme, tells CSW: “We’re seeing more and more that government departments or policymakers don’t come to us just with technical questions, but also with ethical questions. For example, local authorities around the country, because of budget cuts over the past few years, have struggled to have enough social workers to identify children who are at risk. So they’re looking more and more towards implementing a machine-learning algorithm that would be able to identify those children.”

She adds: “The question there isn’t so much can you write that algorithm – because you can; [although there is] a debate as to how accurate those algorithms can be. But the more important question is whether you should write that algorithm in the first place. We are seeing a lot of those questions, and I don’t think many of those organisations are equipped to deal with them. In local authorities, for example, it’s usually the person who administers the IT system – and they’re not in a position to answer.”

Examining the ethics of using machine learning in children’s social care is the focus of one of the research projects the public policy programme is running. 
Another project is looking at possible uses of AI across the criminal justice system – from identifying offenders to improving prisons.


“Government has massive resources of transactional data but, traditionally, hasn’t been very good at using it to make policy – they’ve relied on surveys and other forms of data” Helen Margetts, Alan Turing Institute

The Turing Institute is also collaborating with regulators, including working with the Information Commissioner’s Office to deliver Project ExplAIn, which, according to the interim report published in June, seeks to offer “practical guidance to assist organisations with explaining AI decisions to the individuals affected”. A detailed “explainability framework” is due to be published imminently.

“Over the last year we ran a series of citizen juries, which were really interesting, and it was fascinating for us to get their views on what type of explanation they will want from an AI system that is informing a decision about them,” Dorobantu says.

The public policy programme director, Prof Helen Margetts, is leading a project examining the role that AI could play in tackling hate speech. Most previous work in this area has, she says, been done by online platforms themselves, who “are doing it in a very kind of reactive sort of way – they just don’t want it to give them a bad reputation”.

She says: “The way research in the past has developed in this area is that there are lots of tools to tackle one sort of hate speech, at one point in time, on one platform, targeted at one category of person. And then these tools are built, and somebody writes a paper about it, and then they sort of chuck it over the wall. And there’s a big pile of papers on the other side of the wall!”

One problem, Margetts explains, is the quality and availability of the data that could inform these tools’ machine-learning algorithms.

She adds: “There’s a huge shortage of data, and there’s a huge shortage of training data; you’ve got literally thousands of tools based on, say, 25 datasets – which are in themselves not that great. So, as well as developing our own classifiers, we hope to be able to do some synthesis work to make those tools available to researchers and policymakers, so that they can actually know which are any good, and which can be used.”
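To make the kind of synthesis Margetts describes more concrete, here is a minimal sketch of scoring several classifiers against several labelled datasets so that cross-dataset weaknesses become visible. The classifier callables, dataset dataframes and column names are hypothetical illustrations for the example, not the Turing Institute’s actual tools or data.

```python
# Hypothetical sketch: benchmark several hate-speech classifiers across
# several labelled datasets, so relative strengths can be compared.
import pandas as pd
from sklearn.metrics import f1_score

def benchmark(classifiers: dict, datasets: dict) -> pd.DataFrame:
    """classifiers: name -> callable taking a list of texts, returning 0/1 labels.
    datasets: name -> DataFrame with 'text' and 'is_hateful' columns (assumed)."""
    rows = []
    for clf_name, predict in classifiers.items():
        for ds_name, df in datasets.items():
            preds = predict(df["text"].tolist())
            rows.append({
                "classifier": clf_name,
                "dataset": ds_name,
                "f1": f1_score(df["is_hateful"], preds),
            })
    # One row per tool/dataset pair makes it easier to see which tools
    # only perform well on the data they were built around.
    return pd.DataFrame(rows)
```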

Leading the way

Margetts says that, although government has a tendency to be “left behind” by emerging technology, AI presents “possibilities for innovation in which government has unique expertise”.

However, it has not always made use of the data that is a necessary by-product of the huge number of interactions government has with citizens.

“It has massive resources of transactional data and, traditionally, all governments haven’t been very good at using [that] to make policy. They’ve relied on surveys and other forms of data. If all your data about citizens is kept in huge filing cabinets, it’s very difficult and very labour-intensive to process that data. But now digital data is a by-product of government. Because there’s no history of using that transactional data, public agencies tend not to use it – but there’s huge potential now for them to do so.”

Over the last decade, more judicious investment in IT has often been characterised by government as a way of saving taxpayer money. But Dorobantu says that this is an unhelpful way to approach AI.

“If you’re looking at it as a way to save money, you’re not going to be investing enough in it to actually make it work the way it should,” she says. “I think the focus should be on seeing it as an opportunity to provide better public services and to improve policymaking, as opposed to as a way to cut costs.
“We’re moving in that direction – but I think the resources need to follow the enthusiasm.” 

A bias view

The risk that AI and automation will not eradicate human bias but will instead embed and industrialise it is seen by many as the biggest problem in using the technology to deliver public services.

Eleonora Harwich, director of research at think tank Reform, says that bias can germinate in several ways.

Data quality is a significant cause, she says. Anyone using health data should assess it against the six commonly cited dimensions of data quality: completeness; timeliness; consistency; integrity; conformity; and accuracy.
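As an illustration of how such an assessment might be automated for a tabular extract, the sketch below computes rough scores for five of the six dimensions; accuracy is left out because it requires comparison against ground truth. The column names and pandas-based approach are assumptions made for the example, not part of any official definition.

```python
# Hypothetical sketch: rough data-quality checks on a tabular health extract.
# Column names ("nhs_number", "recorded_at", "postcode") are illustrative.
import pandas as pd

def quality_report(df: pd.DataFrame, reference_ids: set) -> dict:
    report = {}

    # Completeness: share of non-missing values per column.
    report["completeness"] = df.notna().mean().to_dict()

    # Timeliness: how stale is the most recent record?
    latest = pd.to_datetime(df["recorded_at"]).max()
    report["timeliness_days"] = (pd.Timestamp.now() - latest).days

    # Consistency: duplicate records for what should be a unique identifier.
    report["duplicate_ids"] = int(df["nhs_number"].duplicated().sum())

    # Conformity: values matching an expected pattern (rough UK postcode check).
    report["postcode_conformity"] = float(
        df["postcode"].dropna().str.match(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$").mean()
    )

    # Integrity: records whose identifier is unknown to a reference system.
    report["orphan_records"] = int((~df["nhs_number"].isin(reference_ids)).sum())

    # Accuracy needs comparison against ground truth (e.g. a clinical audit),
    # so it cannot be scored from the extract alone.
    return report
```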

But even that is no guarantee of eradicating bias. “Are you going to take the view that the data you are using is objective? Obviously, there are laws of science and physics, but I do think that the ways we go about collecting data are a social construct – and there are many different ways of measuring its purity,” Harwich says. “If your data is from [NHS] trust A – which does not have a representative population – and you’re then using that data to train the model which is then used by trust B or trust C – it will not work.”
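A minimal sketch of the cross-site check Harwich’s example implies: train a model on data from one trust, then compare its performance on a held-out slice of that trust against data from another. The dataframes, feature list and “readmitted” label are hypothetical stand-ins, and scikit-learn is assumed.

```python
# Hypothetical sketch: does a model trained on trust A's population
# still perform when applied to trust B's?
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def cross_site_gap(trust_a, trust_b, features, label="readmitted"):
    X_train, X_test, y_train, y_test = train_test_split(
        trust_a[features], trust_a[label], test_size=0.3, random_state=0
    )
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # In-site performance: same population the model was trained on.
    auc_a = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Out-of-site performance: a trust with a different patient mix.
    auc_b = roc_auc_score(
        trust_b[label], model.predict_proba(trust_b[features])[:, 1]
    )

    # A large gap suggests the model has learned trust A's population,
    # not a relationship that transfers to other trusts.
    return {"auc_trust_a": auc_a, "auc_trust_b": auc_b, "gap": auc_a - auc_b}
```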

While acknowledging that bias presents a “big problem” for public sector use of AI, Margetts also believes that increased access to data presents a chance to combat it.

“First of all, we’re seeing this bias as explicit for the first time, because we have the data. We didn’t used to have the data, so we couldn’t tell you whether people were disproportionately sentenced or suspected because of their race, or hired because of their gender, for example,” she says. “The other thing is: we might be able to do something about it… [before], we didn’t know so much about the decision-making process, and we couldn’t delve in or tweak it. But, now, we might be able to.”
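As a simple illustration of how such bias can be made explicit once decisions are recorded as data, the sketch below compares selection rates across groups and flags any falling below the commonly cited four-fifths ratio. The column names and threshold are illustrative assumptions, not a legal test or a method attributed to the Turing Institute.

```python
# Hypothetical sketch: make disparities visible by comparing selection
# rates (e.g. hiring decisions) across groups in recorded decision data.
import pandas as pd

def selection_rate_disparity(df: pd.DataFrame,
                             group_col: str = "ethnicity",
                             outcome_col: str = "hired") -> pd.DataFrame:
    rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    result = rates.to_frame()

    # Disparate-impact ratio: each group's rate relative to the highest rate.
    result["ratio_vs_highest"] = result["selection_rate"] / result["selection_rate"].max()

    # Flag groups falling below the commonly cited four-fifths rule of thumb.
    result["below_four_fifths"] = result["ratio_vs_highest"] < 0.8
    return result
```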

This article forms part of a special AI focus on our sister site PublicTechnology.

