For AI to serve the public, we need comprehensive reform – Labour's data bill isn't the answer

The new law will quietly water down the imperfect backstop regulation that has limited the decisions which can be fully automated

By Joseph Summers

02 Jun 2025

More and more of government’s day-to-day decisions are being automated. Public authorities hope that by deploying technologies such as AI, they can deliver a massive reduction in the cost of administering public services. But as they do so, they face uncertainty about how the rules that constrain human officials will apply to machines’ decision-making processes.

Now, a new law – the data (use and access) bill – is set to quietly water down the imperfect backstop regulation that has, until now, limited the decisions which can be fully automated. So, as this kind of automated state emerges, now is the time to consider how we might better regulate public decision-making by machine.

The challenge of uncertainty in an increasingly automated state

Gradually, we have developed principles of administrative law which aim to ensure that public sector decisions are fair and of high quality. Now, those rules ought to adapt to the emergence of automated decision-making, which can change our sense of what it means for a process to be fair.

Take, for example, the rule that human decision-makers should not act on irrelevant considerations. How should that apply where a decision is reached using a machine learning algorithm? Such systems can identify and act on patterns in their data that appear incomprehensible to a human observer. And if one is used to decide, for example, that a person’s benefits claim was likely fraudulent, how can officials know whether the algorithm used only relevant information?

Public authorities face considerable uncertainty in trying to answer questions such as these – and potentially disastrous consequences for getting the answers wrong. That is because adopting automated decision-making technologies can introduce new risks into decision-making processes. They can, for example, introduce new systemic biases, or bake in existing ones, where systems have been trained on biased or insufficiently representative data.

"Automated decision-making technologies can introduce new or bake-in existing systemic biases where systems have been trained on biased or insufficiently representative data"

Data protection: an imperfect backstop

In the presence of this uncertainty, UK data protection law has acted as an imperfect check on how decisions are automated. Article 22 of the UK GDPR provides a relatively simple rule for determining whether it is permissible to automate significant decisions. Generally, significant decisions must be taken with the involvement of a human being (unless certain exceptions apply, such as the express consent of the data subject). The rule is far from perfect. Human involvement in an automated decision does not guarantee that it will be fair. A person can be involved in a decision yet have only an ambiguous level of control over it: they might, for example, lack the technical know-how to understand the automated system that they oversee, or they might simply lack the time to properly review decisions.

But whatever the shortcomings of this backstop in securing fairness, the data (use and access) bill does not remedy them. Instead, it limits the requirement for human involvement in significant decisions so that it applies only where a decision is taken using certain types of sensitive data (defined in Article 9 of the UK GDPR). What this neglects is that the effect of a decision (i.e. its potential to help or to harm) matters irrespective of the category of data it was based on. The bill’s new, more permissive rule is neither risk-based (in terms of decisions’ potential impact on the people they affect) nor based on the limits of available technologies. The effect of this change, therefore, is to water down the existing protection, whilst offering no alternative means by which to guarantee fairness in the increasingly automated state.

The opportunity: a new public law for the automated state

The response to this change should not be to simply reinstate Article 22. A binary rule requiring human involvement in some decisions but not others will be decreasingly appropriate as public authorities become increasingly automated. Human involvement in a decision is better understood as a matter of degree which ought to be determined by the extent to which it calls for a distinctly human exercise of empathy, judgement or discretion.

Instead, what is needed is comprehensive, considered reform of the more flexible rules that govern human decision-makers: a new public law for the automated state.

Joseph Summers is a research fellow at Public Law Project
