Ethical use of AI in investigations
Chris Johnson examines how police can harness the power of artificial intelligence while managing public expectations over its use.
Ever since laws were first created in ancient cultures, people have been breaking them in increasingly devious ways. This has led to an ongoing cat-and-mouse game between criminals and those who enforce the law, with each side constantly evolving its methods and technologies.
As criminal activity becomes more complex, global and technological, law enforcement professionals are seeking out more efficient ways to collect, process and analyse the vast quantity of information that is often involved in criminal investigations.
International travel and trade, electronic communications and financial transactions, and cyber-assisted crime have made the jobs of investigation professionals far more complicated than they used to be. Add in public perception of the ‘art of the possible’, often shaped by television, and we encounter a mismatch between expectation and actual capability. This comes at a time when many criminal justice organisations are experiencing debilitating cuts in funding and personnel.
Fortunately, artificial intelligence (AI) offers a range of applications to assist in criminal investigations. AI enables law enforcement agencies to process and analyse vast amounts of data that even highly trained investigators could not manage in reasonable timeframes. It acts as a kind of force multiplier that can save time and money while improving accuracy and, more importantly, saving lives.
But I do sometimes hear AI followed by the words “have you considered the ethics?”. When an AI platform highlights or explains its decision or course of action, surely this is ethical? After all, it is no different from asking an analyst to explain how they arrived at a hypothesis or recommendation.
Let us take an example: internet data mining is one of the common applications of AI in criminal investigations. This kind of analysis is often used to exploit blind spots in criminal behaviour that can lead to information that helps investigators to identify, understand and collect evidence about criminal actors.
Pervasive social media use around the world has worked in the favour of criminal justice because most people (even criminals) engage in some kind of social media or other internet activity as part of their daily routines.
Recent research showed that more than half the world (58 per cent) now uses some form of social media: 4.62 billion people, with a whopping 424 million new users in the past 12 months. On average, we spend two hours and 27 minutes a day on social media.
It is physically impossible for a human to review, collect and understand all that data on a daily basis; the data, I might add, can change every millisecond.
To use the proverbial ‘needle in a haystack’ analogy, how do investigators overcome this goliath challenge? The answer is AI. Special algorithms can analyse information from social media platforms, official records, financial activities and a host of other informational treasure troves.
One could argue that the use of AI is ethically sound if it can be deployed in a way that is:
- Accountable – who used it and when;
- Targeted – for a specific purpose, for example, topic or individual;
- Contextual – provides the investigator reasoning or ‘explainability’; and
- Legitimate – that data is not misappropriated.
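To make the four criteria concrete, here is a minimal sketch of how they might be operationalised as an audit record around an AI query. This is purely illustrative: the class and function names are hypothetical and do not refer to any real product or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch only: names and fields are illustrative assumptions.
@dataclass
class AuditedQuery:
    """An AI-assisted search request annotated with the four criteria."""
    analyst_id: str   # Accountable: who used it (timestamp below records when)
    purpose: str      # Targeted: the specific topic or individual
    legal_basis: str  # Legitimate: the authority under which data is accessed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def run_query(query: AuditedQuery) -> dict:
    """Refuse to run a query missing any criterion, and always attach
    an explanation to the result (Contextual / 'explainability')."""
    for name in ("analyst_id", "purpose", "legal_basis"):
        if not getattr(query, name).strip():
            raise ValueError(f"query blocked: missing '{name}'")
    return {
        "audit": query,
        "explanation": f"results ranked by relevance to stated purpose: {query.purpose}",
    }
```

The point of the sketch is that the ethical safeguards are enforced before the analysis runs and logged alongside it, rather than bolted on afterwards.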
Chris Johnson is Sales Director UK and Nordics, Voyager Labs – Security and Public Safety. https://www.linkedin.com/in/christopher-johnson-477257110/