Data ethics centre and Race Disparity Unit to investigate algorithm bias

Centre for Data Ethics and Innovation will look at how algorithms could reflect the biases of their developers


The government’s data ethics centre and the Cabinet Office’s Race Disparity Unit will launch a joint investigation into the potential for bias in algorithmic decision-making, it has been announced.

The Centre for Data Ethics and Innovation (CDEI) will examine how human biases can affect the outcomes of decisions made using algorithms in the criminal justice system, local government, recruitment and financial services, it said in a strategy document published this week.

The strategy sets out CDEI’s priorities for the next two years, and is the first such document to be published since its board met for the first time in December. The strategy also sets out the centre's plans for public engagement to help people better understand how data-driven technology is used, in the hope of creating a “trusted and trustworthy environment for innovation”.


Algorithms are sometimes used in a bid to rule out unconscious bias in decision-making, but there is growing concern about the potential for developers to inadvertently embed their own biases into the algorithms they create. There have been reports of algorithms used to screen CVs displaying gender bias, for example.

The Race Disparity Unit, which analyses government data on the experiences of people from different ethnic backgrounds, will work with CDEI on the project to explore the risk of people being discriminated against according to their ethnicity in decisions made in the criminal justice system. Together, the two bodies will come up with a set of recommendations for government.

Announcing the investigation, the government said there was scope for algorithms to be used to assess the likelihood of reoffending and inform decisions about policing, probation and parole.

Some police forces have already started using algorithms to inform decisions, it said, including one force that is using them to help assess the risk of someone reoffending, and whether they should therefore qualify for deferred prosecution.

The science and technology select committee of MPs is among the groups that have raised concerns about how people might fall foul of biases in algorithm-based decisions. In a report in May last year, it called on the government to give citizens a legal “right to explanation”, which would mean they were entitled to know how decisions affecting them had been made.

In its response in September, the government acknowledged that the civil service needed to be transparent about how it used algorithms, although it did not commit to producing a list of where algorithms “with significant impacts” were used, as recommended by the committee.

In a statement accompanying the strategy’s publication, digital secretary Jeremy Wright said CDEI would help to ensure that technology used to improve people’s lives would be developed “in a safe and secure way”.

“I’m pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services.

“I look forward to seeing the centre’s recommendations to government on any action we need to take to help make sure we maximise the benefits of these powerful technologies for society.”

The centre, which was first announced in the November 2017 Budget, is funded by the Department for Digital, Culture, Media and Sport but is independent of government. It is currently acting as an expert committee, but the government has said it will give it statutory powers in future.

Last month the chair of the CDEI said bias would be one of the main topics the centre would focus on in its first few months, along with micro-targeting in advertising.

Speaking at the Public Sector AI summit hosted by CSW's sister title PublicTechnology and parent company Dods, Roger Taylor said CDEI would examine how best to test AI systems to ensure that the public “believe that the governance of the system is in line with societal values”.

This came after prime minister Theresa May’s speech at the World Economic Forum in Davos last year, when she said one of the centre’s tasks would be to help establish rules and standards to ensure artificial intelligence is used responsibly, “such as by ensuring that algorithms don’t perpetuate the human biases of their developers”.
