Demystifying Artificial Intelligence for investigations

Jul 23, 2018
The technology's potential lies in pattern recognition and highlighting connections for human attention.

Cityforum – the think tank with a long history of facilitating senior conversations about technology in policing – hosted a roundtable in June. The sessions covered technology, data and the IoT, and we were struck by the level of confusion in the air. The role of artificial intelligence (AI) in policing is poorly understood, both inside the policing community and in society at large.

The RSA (Royal Society for the encouragement of Arts, Manufactures and Commerce) polled 2,000 members of the public and found only 9% knew that AI decision systems were even involved in policing, and a mere 12% supported AI’s use.

Despite this lack of awareness, respondents believed AI lacked the empathy required to make decisions affecting people and communities (61%), reduced the responsibility and accountability of investigators (31%), and was poorly overseen and regulated by government (26%).

Even if improvements were promised, only 26% of respondents said they would be comfortable with the future role of AI in policing and investigation – compared with 64% who said they would not be comfortable at all.

Both inside investigation teams and among the general public, people are still unsure about what AI is, how it works and the benefits it holds for investigations.

So what are some common myths about AI in police work – and what’s the reality?

What AI is – and isn’t

AI is the media-friendly term for machines doing what humans do: sensing, reasoning, reacting and adapting to change, to the point where they’re beating humans at complex games like Go. They do it through ‘machine learning’ – finding and predicting patterns using a method called statistical inference.

Consider Amazon’s Alexa. It doesn’t understand words as such – it learns patterns of sound, and it’s programmed to react to them in particular ways, but there’s nothing there that can grasp meaning.

What machine learning can do, however, is help humans achieve tasks faster and more accurately, by speeding up pattern recognition and highlighting connections for human attention. Analysis techniques such as logistic regression and linear regression can be carried out much more efficiently by AI, while interpreting the findings remains a human responsibility.
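To make that concrete, here is a minimal sketch of the kind of statistical model involved, using the open-source scikit-learn library. The features, data and relevance labels are invented purely for illustration; a real system would learn from far richer case material.

```python
# Minimal sketch: logistic regression used to prioritise documents for review.
# The features, data and labels below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for past case documents:
# [number of flagged keywords, number of known contacts mentioned]
X = np.array([[0, 0], [1, 0], [3, 2], [5, 4], [2, 1], [6, 5]])
# 1 = the document proved relevant to the investigation, 0 = it did not
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)

# Score a new document so a human can decide what to review first
new_doc = np.array([[4, 3]])
print(model.predict_proba(new_doc)[0][1])  # estimated probability of relevance
```

The model only ranks material for attention; deciding what the results mean remains the investigator’s job.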

AI in investigation is already here

Once you consider the reality of AI rather than the sci-fi version – a system that learns patterns and makes programmed responses – the practical applications for police teams become clear. Some of the specific capabilities of AI align closely with the needs of investigators.

AI technology, for example, helps investigators by automatically identifying and classifying key information within large datasets like the inbox, file storage, laptop and mobile phone download of every suspect within an investigation. Investigators simply don’t have the resources to manually search all the digital data that could be associated with a case.  


Through a process often referred to as automated entity recognition, AI technology can be trained to spot key investigative information (such as names, telephone numbers, organisations, addresses, dates and events) and then extract and present this information in a meaningful way to help uncover new lines of enquiry for investigators.
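The article doesn’t name a specific tool, but as a rough illustration of what automated entity recognition looks like in practice, here is a sketch using the open-source spaCy library (assuming its small English model, en_core_web_sm, has been installed). The text is invented.

```python
# Illustrative only: off-the-shelf named entity recognition with spaCy.
# Assumed setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

text = ("John Smith called Acme Logistics on 14 March "
        "and arranged a meeting at 22 Baker Street, London.")

doc = nlp(text)
for ent in doc.ents:
    # Each entity is returned with a type label such as PERSON, ORG, DATE or GPE
    print(ent.text, ent.label_)
```

In an investigative tool, the extracted names, organisations, dates and places would be indexed and linked rather than simply printed.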

AI technology also helps investigators on an ongoing basis during the course of an enquiry. Investigators can teach AI technology to continually and automatically cross-reference new material, as it surfaces, against the data that already exists. A car registration logged six months ago as part of a separate stream of activity suddenly becomes relevant when it matches a number found in CCTV analysis that has just arrived in the investigation.
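A toy sketch of that cross-referencing idea, in plain Python, might look like the following; the case index, identifiers and exhibit references are all invented for illustration.

```python
# Toy sketch: check identifiers from new material against what is already on file.
# All identifiers and exhibit references below are invented.
case_index = {
    "AB12 CDE": "vehicle logged six months ago (separate stream, exhibit 14)",
    "+44 7700 900123": "number from an earlier phone download",
}

def cross_reference(new_identifiers):
    """Return any newly surfaced identifiers that match existing case material."""
    return {i: case_index[i] for i in new_identifiers if i in case_index}

# Identifiers extracted from the latest CCTV analysis
new_material = ["XY98 ZZZ", "AB12 CDE"]
print(cross_reference(new_material))
# -> {'AB12 CDE': 'vehicle logged six months ago (separate stream, exhibit 14)'}
```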

Other professions have adopted AI for similar reasons – the legal and medical sectors have found it capable of reviewing the extensive, convoluted case histories with which they work.

What’s next for AI?

The future for AI is an incremental picture of improving existing capabilities – an evolution of machine learning, not a world-shaking leap.

Scanning and tagging of images and videos is likely to speed up evidence reviews, answering key questions more simply. If an investigator needs to know whether a suspect or witness was where they claimed to be, they currently have to review CCTV footage manually. As image scanning improves, AI will get even better at transcribing CCTV footage and highlighting still images of interest for the investigator to focus on.

Causal analysis is another area of interest – the next step in pattern recognition. Put simply, causal analysis is a way of mapping cause and effect relationships between the events that make up a situation: it records the way in which situations build up. AI can use these insights to run consistency and plausibility checks, highlighting the most plausible sequence of events. For example, it can review phone records to verify a suspect’s alibi or identify co-conspirators, based on patterns learned from similar cases.
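As a simplified, purely illustrative sketch of such a plausibility check, the snippet below compares invented cell-site records against a claimed alibi window; a real system would weigh far richer evidence.

```python
# Simplified sketch: do cell-site records contradict the claimed alibi window?
# All times, locations and records below are invented.
from datetime import datetime

alibi = {
    "location": "Manchester",
    "start": datetime(2018, 6, 1, 19, 0),
    "end": datetime(2018, 6, 1, 23, 0),
}

phone_records = [
    {"time": datetime(2018, 6, 1, 19, 30), "cell_area": "Manchester"},
    {"time": datetime(2018, 6, 1, 21, 45), "cell_area": "Leeds"},
]

def check_alibi(alibi, records):
    """Flag records inside the alibi window that contradict the claimed location."""
    return [r for r in records
            if alibi["start"] <= r["time"] <= alibi["end"]
            and r["cell_area"] != alibi["location"]]

print(check_alibi(alibi, phone_records))
# -> the 21:45 record in Leeds, which an investigator would want to examine
```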

Causal analysis can also be used to predict the likely outcomes of similar events by identifying the common factors behind criminal activity. With enough data available, AI can trigger alerts based on behaviour learned from previous investigations – recognising a suspect’s modus operandi, pointing investigators towards likely outcomes and enabling better resource allocation as a result.
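A rough, hypothetical sketch of that kind of alerting: score each new event against modus operandi features drawn from previous cases and flag close matches. The features, values and threshold are invented for illustration.

```python
# Rough sketch: flag new events that resemble a modus operandi learned
# from previous investigations. Features and threshold are invented.
known_mo = {"entry": "rear window", "time_band": "02:00-04:00", "target": "pharmacy"}

def mo_score(event, mo):
    """Fraction of MO features the new event shares."""
    matches = sum(event.get(key) == value for key, value in mo.items())
    return matches / len(mo)

new_event = {"entry": "rear window", "time_band": "02:00-04:00", "target": "off-licence"}

if mo_score(new_event, known_mo) >= 0.66:
    print("Alert: new event resembles a known modus operandi - review suggested")
```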

There’s nothing here that isn’t within the remit of human investigators – learning the patterns that crimes have in common, reviewing extensive evidence logs, and assessing suspect behaviour based on known information. What AI brings to the table is the ability to scale this to ever-increasing amounts of data, eliminating the time-consuming stages of poring over evidence by hand. AI improves by incremental progression, as programming and processing improve – moving toward the future one step at a time.

Don’t expect to turn up to work one day – in five, ten, or thirty years’ time – and be introduced to a cyborg colleague. It’s more likely that you’ll be working with humans: representatives of clever, agile technology companies, speaking in plain English, working with investigators to save time and support decisions.

Book a demo

Find out how Clue can help your organisation.