Some priority areas for the Centre include:
- Building knowledge on the possible malicious use of AI by criminals and terrorist groups, as well as potential counter-measures.
- Enhancing awareness of the threats of AI-generated or manipulated voice or video content, such as deepfakes.
- Fostering responsible AI innovation within the law enforcement community.
- Promoting and supporting the development of policy frameworks for the deployment of facial recognition software.
- Exploring the development of pilot AI applications in criminal investigations, in particular to combat the rise in online child sexual exploitation and abuse.
- Enhancing cybersecurity through the use of AI to support the detection and investigation of, and protection from, cyberattacks.
- Building knowledge on the use of AI in counter-terrorism, in particular in the context of terrorist use of the internet and social media.
- Analysing the possible application of AI in the administration of criminal justice and corrections administration.
Future-proofing the criminal justice system
Crime prevention and criminal justice, in particular law enforcement and national security, are areas where AI and related emerging technologies have the potential to complement or even greatly enhance traditional techniques. Given the increasingly data-heavy nature of criminal investigations and the evolving, complex nature of criminality, the criminal justice system is a domain that stands to derive substantial benefit from new and emerging technologies.
AI has already been used to help law enforcement identify and locate long-missing children, scan illicit sex ads and disrupt human trafficking rings, flag financial transactions that may indicate money laundering, and protect citizens’ privacy by automating the anonymization of surveillance footage. Such technologies may also find application in the courts, where they can support efficient research on jurisprudence to identify precedents and assist legal professionals with case management to ensure the timely delivery of justice.
Masked behind these benefits, however, is a range of social, ethical and legal issues that have yet to be fully explored and analysed. These include concerns surrounding data collection and violations of the right to privacy in AI development, algorithmic bias and black-box decision-making systems, and unforeseen outcomes arising from the autonomous use of force. There is also the ever-present risk that criminals or terrorist organizations may misuse these technologies; with every new technology comes vulnerability to new forms of crime and new threats to security. Nevertheless, through proper understanding and responsible development, the Centre aims to build trust in AI and robotics as agents for positive change.