Artificial intelligence is transforming business processes, driving growth and changing our lives as consumers. A powerful form of computing technology, AI delivers critical insights by emulating aspects of the way humans think.
When a technology as disruptive as AI comes along, questions are often raised about how it affects us as citizens and what controls should be in place to ensure it is used responsibly. How AI could and should be used by law enforcement agencies is one subject worth exploring further.
An AI discussion paper released by the Department of Industry, Innovation and Science stated that AI-enabled predictions can deliver significantly more powerful, accurate, replicable and efficient results than those made by humans. This type of capability is especially useful for the public safety sector, where decision makers need to make fast judgements about how to respond to incidents, often while under extreme pressure. However, the broader use of AI within law enforcement needs careful exploration before it can be implemented at scale.
“AI can enable prediction and problem-solving approaches that save the lives of seriously ill hospital patients,” the Australian Human Rights Commission stated in a recent AI white paper. Yet the same paper also highlights its potential to compromise human rights. “For example, we have seen allegations of AI entrenching bias and discrimination in the United States criminal justice system.”
As the growing volume of data in the world today is harnessed for use by law enforcement, AI is no longer a passing trend – it is becoming an important tool for public safety investigations and management. However, transparency about where and how AI is being used is essential so that the citizens it aims to protect understand how it can benefit them.
The business principles of AI
In law enforcement, video analytics powered by AI algorithms have the potential to vastly improve the time taken to complete important tasks – for example, to quickly find a lost child or senior citizen among a sea of people in a city.
However, in considering and building support for AI-based solutions, Paul Steinberg, Senior Vice President of Technology with Motorola Solutions, says there are three important principles that need to be applied.
First, AI should advise humans rather than remove them and their judgement from the decision-making process. In fact, AI should never be used to make consequential decisions without keeping a “human in the loop” to validate the findings drawn from AI-generated data.
Second, AI should be applied in a highly focused way to fulfil specific business needs – for example, helping users to complete their tasks faster and more accurately – and only within the context of the user’s typical workflow. Controls should also ensure that people using AI tools do so appropriately, with the necessary compliance measures and restrictions in place.
Third, AI should use technologies that are proven and not experimental. This helps to ensure more predictable and reliable results while limiting risk.
Globally, AI is helping law enforcement and emergency responders to simplify and streamline the way they work. For example, AI voice assistants not only enable officers to complete more tasks hands-free, but also help them work more efficiently by reducing routine paperwork and other manual tasks. Used in this way, AI can save officers considerable time, which can then be reinvested in policing within their communities.
AI can also be used to improve the effectiveness of resource allocation – for example, to support the onboarding of new staff, recruitment processes, training and even predictive policing.
When used to support emergency management, AI can also present data and analysis visually, giving decision makers real-time insights that improve the quality of their decisions.
“I’m really excited about AI, but what I’m worried about is that people misunderstand its potential so much that we could throw the baby out with the bath water, as the saying goes,” Steinberg said during a keynote presentation at the PSCR Public Safety Broadband Stakeholders Meeting in Chicago.
“In general, what I’m afraid of is that people assume AI equals facial recognition which equals tracking and monitoring. And therefore, that AI equals bad — and that couldn’t be further from the truth.”
The responsible use of AI
Engaging in open conversations about AI is important in changing assumptions about its use. AI should also be used both ethically and responsibly, Steinberg said. In law enforcement, the ethical use of any technology is best achieved when people have a say in its application and it supports clear business needs. This helps to ensure that AI delivers the right outcomes while garnering wider support from the public.
“The more tech we throw at people, one could argue that it’s actually not helpful, unless it’s done very, very carefully,” Steinberg said. “That’s where context becomes so important – understanding the instance of the human’s condition and adapting the communication and technology for that situation.”
For law enforcement, this means developing tailored software and services that deliver solutions and capabilities to meet specific needs. This will be a challenge for many public safety agencies, but focusing on clear objectives and outcomes will help ensure AI is used in the right way. Additionally, as humans verify that AI-generated results are accurate, AI can gradually be trusted to do more.
“At Motorola Solutions, we have a long heritage of delivering mission-critical grade communications and intelligence technologies to our customers. Now we are applying the same standard for all of the AI tools we develop and support. We see powerful potential for artificial intelligence to improve safety and efficiency for our customers, which in turn helps create safer communities,” Steinberg said.