To protect (be trusted) and serve

Julian Hayes and Andrew Watson examine the critical balance between human rights, privacy and the use of biometric technology by the police.

May 25, 2021

“It is vital the Government works to empower police to use technology to keep the public safe while maintaining their trust.”

With those words, the Home Secretary announced the recent appointment of Professor Fraser Sampson as the first person to hold the combined post of Biometrics and Surveillance Camera Commissioner for England and Wales. Bringing together two previously distinct offices, the new Commissioner will be responsible for promoting the appropriate use of biometric data as well as the overt use of surveillance camera systems by relevant authorities.

It is fair to say the professor’s tenure got off to a contentious start when, just two months into the job, he reportedly suggested that discretion rather than law should govern police use of facial recognition technology (FRT). While FRT arouses much controversy, it is just one of a fast-growing range of technologies available to law enforcement that harness the power of algorithms and artificial intelligence (AI) to achieve what, until recently, existed only in science fiction.

The Commissioner no doubt spoke for many people when he asked how, if certain technology was available, a policing body could responsibly not use it to prevent and detect crime and to keep people safe. However, such technological advances have developed without a dedicated legal and regulatory framework in place, raising serious concerns over privacy, fairness and human rights. Courts and legislatures are now beginning to grapple with the problem.

Algorithmic policing

Algorithmic policing technology falls into two broad categories: surveillance technology and predictive technology.

Surveillance technology automates the collection and analysis of data. Examples include facial recognition, social network analysis revealing connections between suspects and, more prosaically, automated numberplate recognition systems.

Predictive technology uses data to forecast criminal activity before it occurs, allowing the police to intervene to apprehend suspects or prevent them from offending. Falling into this category are predictive mapping programmes such as PredPol, which identify crime ‘hot spots’, and individual risk-assessment programmes, such as the Harm Assessment Risk Tool developed by Durham Constabulary, which seek to predict the likelihood that a particular individual will re-offend. Also in this category is emotion recognition, which analyses facial expressions to try to decode an individual’s mood and intentions, and is to be trialled by Lincolnshire Police.

Pros and cons

Whether it is identifying missing individuals, catching serial attackers or solving cold cases by scanning old CCTV footage, the use of AI by law enforcement is reaping benefits and offers a potential means of meeting society’s increasing expectations of rapid policing results despite increased pressure on overstretched budgets.

However, its rapid development and deployment by police forces across the world is also causing unease.

The problem of racial and gender bias in AI has been widely reported: algorithms are only as good as the source material on which they are trained, and bias in training datasets will be carried through when they are used in the real world. Likewise, predictive crime mapping risks becoming a self-fulfilling prophecy, with intensive policing of perceived crime hot spots simply recording more offending than in neighbouring areas where, in fact, crime levels are similar. Concentrating police resources on such hot spots can also lead to allegations of ‘over-policing’, jeopardising community cohesion.
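For illustration only, the short Python sketch below simulates that feedback loop under deliberately simplified, hypothetical assumptions: two areas with identical underlying offending, patrols allocated in proportion to previously recorded crime, and offences recorded only where patrols are present. The parameters and figures are invented for the example, not taken from any real force or product; the point is simply that a small head start in recorded crime keeps attracting patrols, which keep recording more crime, apparently confirming the ‘hot spot’.

```python
# Illustrative sketch only, not drawn from any real system: a toy model of the
# 'self-fulfilling prophecy' risk in predictive hot-spot mapping.
# Assumptions: areas A and B have IDENTICAL underlying offending, patrols are
# allocated in proportion to previously *recorded* crime, and an offence is
# only recorded if a patrol happens to observe it.
import random

random.seed(1)

TRUE_OFFENCES_PER_PERIOD = 100        # same underlying offending in each area
RECORDING_CHANCE_PER_PATROL = 0.004   # chance one patrol unit records a given offence

# A small historical imbalance in recorded crime is enough to start the loop.
recorded = {"A": 55, "B": 45}

for period in range(1, 11):
    total_recorded = sum(recorded.values())
    # The 'predictive' step: send 100 patrol units where crime was recorded before.
    patrols = {area: 100 * count / total_recorded for area, count in recorded.items()}
    for area, units in patrols.items():
        p_record = min(1.0, units * RECORDING_CHANCE_PER_PATROL)
        # Each underlying offence is recorded only if patrols observe it.
        recorded[area] += sum(
            random.random() < p_record for _ in range(TRUE_OFFENCES_PER_PERIOD)
        )
    print(f"period {period:2d}: patrols A={patrols['A']:.0f} B={patrols['B']:.0f}, "
          f"recorded so far A={recorded['A']} B={recorded['B']}")
```

Running the sketch shows area A consistently drawing more patrols and accumulating more recorded offences than area B, even though the underlying offending in both areas is identical by construction.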

Finally, some algorithms claiming to predict individual offending are known to have taken into account questionable indicators of recidivism such as postcodes. In a similar vein, the highly experimental emotion recognition programmes are based on potentially crude interpretations of facial expressions, which may fail to allow for cultural differences in expression. In the long term, predictive policing of this nature may enable the diversion of individuals from a behavioural path which ultimately could lead them or others to catastrophe. However, in the short term, it has the potential to stigmatise law-abiding individuals and risks widespread monitoring and surveillance of innocent citizens to prevent offending.

International legal and regulatory models

As algorithmic policing has become more prevalent, some technology companies have themselves tried to agree common standards to alleviate the issues arising. During 2020, for example, IBM, Amazon and Microsoft all announced bans or moratoriums on selling FRT to law enforcement.

At a more formal level, an international hotchpotch of legal and regulatory models has sprung up. At one extreme is the Chinese model where, for example, facial and emotion recognition technologies have been encouraged and swiftly taken up and are now so pervasive that parts of the country resemble a surveillance State, with minor offenders identified and punished and whole communities tracked and incarcerated. At the other extreme, some US cities such as Boston, Portland and San Francisco have banned law enforcement from using FRT entirely.

In Europe, surveys suggest around half of EU Member States’ police forces use FRT, and a plan has been mooted to expand the Prüm data-sharing system beyond DNA, fingerprint and vehicle registration numbers to include facial images.

Legal challenges to the use of FRT have taken place, including in Italy where the SARI Real Time system, deployed by the Interior Ministry in migration and public order contexts, was recently ruled unlawful by the Italian data protection authority on the grounds that it allowed excessive discretion to those using it and was insufficiently targeted.

The European Commission recently unveiled its much-anticipated proposal for an Artificial Intelligence Act (AI Act), under which real-time remote biometric identification of individuals in publicly accessible spaces for law enforcement purposes would be banned except where strictly necessary for a small number of specified purposes: targeted searches for missing children; the prevention of a specific, substantial and imminent threat to life, such as a terrorist attack; or the identification of someone suspected of one of an exhaustive list of serious offences.

As well as being necessary and proportionate, the use of such technology for any of these purposes would also have to be expressly and specifically pre-authorised by a judicial or independent administrative authority, except in cases of extreme urgency, where retrospective authorisation could be sought. That apart, the AI Act envisages law enforcement using certain ‘high-risk’ forms of AI for individual risk assessments, to assess the likelihood of re-offending and to detect emotional states. Such use would nevertheless have to conform to the EU Charter of Fundamental Rights, the General Data Protection Regulation (GDPR) and the compliance measures stipulated in the AI Act itself.

Homegrown approach

Closer to home, the Bridges case against South Wales Police provided the Court of Appeal with an opportunity to examine the sufficiency of the legal framework in England and Wales governing law enforcement use of a particular type of automated facial recognition (AFR) known as AFR Locate. In that case, the court found that too much discretion had been left to individual officers over who should be included on the AFR ‘watchlist’ and where the technology should be deployed.

Although the court stressed that the appeal was not concerned with the possible future use of AFR on a national basis, legal commentators have noted that, so far, it is the only serious guidance in this country on the use of automated decision-making systems by law enforcement, and that it indicates the sort of measures and requirements regulators will have to consider when devising a workable framework.

The Information Commissioner, who was an ‘Interested Party’ in the Bridges litigation, had previously urged the Government to bring forward as soon as possible binding national guidelines, though none have so far been published.

In the meantime, detailed ‘best practice’ guidance issued in the aftermath of the Bridges decision provides a useful exposition of the existing rules on the overt use of surveillance camera systems incorporating facial recognition, and may have come as welcome relief to police officers wishing to use the technology lawfully but left in a quandary by the patchwork of applicable law and regulation.

Conclusion

Whether it takes the form of surveillance or predictive technology, algorithmic policing arouses strong feelings. While campaign groups see it as a dangerous incursion into our privacy, public opinion seems more supportive, particularly when it is weighed against the risk that terrorists and other serious criminals might escape justice.

Rushing headlong into an age of hi-tech policing without a widely accepted legal and regulatory framework to govern its use, and to guide those responsible for administering it, risks unforced policing errors and may jeopardise confidence in law enforcement and the wider criminal justice system. Though running to catch up, courts and legislatures are finally taking steps to set clear parameters for the use of such AI by law enforcement.

Baking transparency, accountability and fairness into the applicable framework is most likely to secure public trust and support for the use of algorithms in the policing of tomorrow.

Julian Hayes is a Partner at BCL Solicitors LLP.

Andrew Watson is a Legal Assistant at BCL Solicitors LLP.
