Bias in algorithms to be explored in new government study
The Government has launched a new research study to examine whether the use of algorithms in the criminal justice system generates bias.
Algorithms – sets of instructions for performing calculations or solving problems – are usually created by analysing historical data and identifying patterns in previous decisions to help make future ones.
Algorithms are increasingly being used to assess the likelihood of re-offending and inform decisions about policing, probation and parole. For example, the Harm Assessment Risk Tool is being deployed by Durham Constabulary to assist officers in deciding whether an individual is eligible for deferred prosecution, based on their risk of future offending.
While algorithms are widely accepted to have huge potential for preventing crime, protecting the public and improving the way services are delivered, the decisions they inform are likely to have a significant impact on people’s lives.
There are concerns that any human bias in the historical data used to create an algorithm may be reflected in the recommendations that algorithm makes.
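The mechanism behind this concern can be illustrated with a minimal, entirely hypothetical sketch (in Python, with made-up group labels and numbers, not any real policing tool): a model that simply learns the historical decision rate for each group will faithfully reproduce whatever disparity those historical decisions contained.

```python
# Hypothetical illustration: a "model" trained on past decisions
# inherits any bias present in those decisions.

from collections import defaultdict

# Made-up historical records: (group, was_flagged_high_risk).
# Group B was flagged far more often for otherwise similar cases.
history = ([("A", 0)] * 80 + [("A", 1)] * 20 +
           [("B", 0)] * 40 + [("B", 1)] * 60)

def train(records):
    """Learn each group's historical flag rate from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += flagged
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

model = train(history)
print(model)  # {'A': 0.2, 'B': 0.6} -- the historical disparity survives training
```

Nothing in the training step distinguishes a genuine risk difference from a biased decision pattern, which is why the provenance of the historical data matters so much.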
These concerns will now be addressed by the Centre for Data Ethics and Innovation (CDEI), a group set up to make sure data-driven technologies and artificial intelligence are used for the benefit of society, working in partnership with the Cabinet Office’s Race Disparity Unit.
Speaking at a Downing Street event to mark the publication of the centre’s first work programme and strategy setting out the CDEI’s priorities, Digital Secretary Jeremy Wright said: “Technology is a force for good which has improved people’s lives but we must make sure it is developed in a safe and secure way. Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development.
“I’m pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the centre’s recommendations to government on any action we need to take to help make sure we maximise the benefits of these powerful technologies for society.”
Roger Taylor, chair of the Centre for Data Ethics and Innovation, said: “We want to work with organisations so they can maximise the benefits of data-driven technology and use it to ensure the decisions they make are fair. As a first step we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people’s lives.”
The launch coincides with the release of a new study by the Police Foundation, which shows that the police are failing to recognise the public’s concerns about how their personal data is used in the fight against crime.
The Police Foundation looked at national and local digital data policing projects. It found that while the police service is using new technology in ways that benefit public safety, the service has been too slow to consider the risks of ‘technology creep’.
The report’s author Dr Ian Kearns said: “One of the most striking features of the debate on data-driven policing in the UK is the absence of any formal mechanisms for including the public’s voice. This is a critical gap which, if not filled, could undermine public confidence in this way of working.”
The issue of data-driven policing is explored in depth in this week’s edition of Police Professional.