Experts from academia, the private sector, the police, the government and state security agencies identified and ranked 20 AI-enabled crimes in order of concern. Fake audio and video content topped the list and was perceived to be the biggest threat.
In this guest blog, Florence Greatrix at UCL STEaPP’s Policy Impact Unit looks at different types of AI-enabled crimes and their impacts.
AI technologies have the potential to be exploited for criminal purposes. To ensure that society is prepared to tackle these new threats, researchers in the UCL Dawes Centre for Future Crime set out to identify what those threats might be and to understand how they could impact our lives. The findings have recently been published in the journal Crime Science and in an accompanying policy briefing.
AI has potential for crime detection (such as smart-city sensors that detect gunfire) and prevention (such as predicting crime hotspots from data on previous incidents). But AI technologies can also be exploited for criminal purposes. Many of us now conduct large parts of our lives online, and online activity can make and break reputations – an ideal environment for criminal exploitation. As lead investigator Professor Lewis Griffin put it: “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”
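Before turning to the threats, here is a concrete illustration of the prevention side mentioned above: a minimal, hypothetical sketch of hotspot prediction, which bins historical incident locations into a grid and flags the densest cells as places to watch. It is a toy example under invented assumptions (the data, grid size and threshold are all made up), not the method of any real predictive-policing system.

```python
# A minimal, hypothetical sketch of grid-based crime hotspot prediction.
# Historical incident coordinates are binned into a grid, and the cells
# with the most past incidents are flagged as likely future hotspots.
# All data, the grid size and the threshold are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)
# Hypothetical (x, y) locations of 500 past incidents in a 10 km x 10 km area
incidents = rng.uniform(low=0.0, high=10.0, size=(500, 2))

# Count incidents per cell on a 20 x 20 grid
counts, x_edges, y_edges = np.histogram2d(
    incidents[:, 0], incidents[:, 1], bins=20, range=[[0, 10], [0, 10]]
)

# Flag the busiest 5% of cells as predicted hotspots
threshold = np.quantile(counts, 0.95)
hotspots = np.argwhere(counts >= threshold)
print(f"{len(hotspots)} of {counts.size} grid cells flagged as hotspots")
```

Real systems layer temporal weighting and careful evaluation on top of this, but the core pattern – learning from where crime has occurred to anticipate where it may occur next – is the same.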
Audio and video impersonation was ranked most highly and was therefore perceived to be the biggest threat. This could involve criminals impersonating children in video calls to their relatives in order to gain access to funds (this type of crime has been seen in Mexico, though with actors playing the role of relatives rather than fake video), impersonating people in phone conversations to request access to secure systems, or fake videos of public figures speaking or acting out of character in an attempt to influence public opinion, such as this fake video of Donald Trump. As well as advances in technology making this more achievable, there are many uncontrolled routes through which fake material can spread, such as social media.
Which other crimes were of biggest concern?
Driverless vehicles as weapons: Motor vehicles have long been used both as a delivery mechanism for explosives and as weapons of terror in their own right. Fully autonomous AI-controlled driverless vehicles are not yet on the road, but numerous car manufacturers and technology companies are racing to deliver them. AI could increase vehicular terrorism by reducing the need for driver recruitment, enabling single perpetrators to perform multiple attacks, or even coordinating large numbers of vehicles at once.
Tailored phishing: Most people are familiar with phishing – a social engineering attack that aims to collect secure information or install malware via a digital message purporting to be from a trusted party, such as a bank. The attacker exploits that existing trust to persuade the user to perform actions they might otherwise be wary of, such as revealing a password or clicking a link. AI has the potential to improve the success rates of phishing attacks by crafting messages that appear more genuine, and to discover ‘what works’ by varying the details of messages to “experiment” at scale and at almost no cost.
Disrupting AI-controlled systems: As AI systems become ever more essential in government, commerce and the home, opportunities for attack will grow, and targeted disruption of these systems could lead to scenarios ranging from widespread power failures to traffic gridlock and breakdowns in food logistics. Systems responsible for public safety and security, and those overseeing financial transactions, are likely to become key targets. However, this crime may be very difficult to achieve, as such attacks typically require detailed knowledge of, or even access to, the systems involved, which may be hard to obtain.
Large-scale blackmail: Traditional blackmail involves exerting pressure under the threat of exposing evidence of criminality or wrongdoing, or embarrassing personal information. Acquiring this evidence is a limiting factor: the crime is only worthwhile if the victim will pay more to suppress the evidence than it costs to acquire. AI could be used to do this on a much larger scale, harvesting information from social media or large personal datasets such as email logs or browser histories, identifying specific vulnerabilities across a large number of potential targets, and tailoring threat messages to each.
AI-authored fake news: Fake news is propaganda that aims to appear credible by being, or seeming to be, issued from a trusted source. As well as delivering false information, fake news in sufficient quantity can displace attention from true information. AI could be used to generate many versions of a particular piece of content, apparently from multiple sources, to boost its visibility and credibility; and to choose content or its presentation, on a personalised basis, to boost impact.
Other crimes were considered to be of lower concern. These included ‘burglar bots’ – small robots used to gain entry into properties through access points such as letterboxes or cat flaps – which were judged easy to defeat, for instance with letterbox cages, and AI-assisted stalking, which, although extremely damaging to individuals, could not operate at scale.
Looking to the future
Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated and even sold, allowing criminal techniques to be marketed and crime to be provided as a service.
Results from a futures exercise such as this are necessarily speculative and reflect the thoughts of our sandpit participants. Nevertheless, the outcomes provide a useful snapshot of prevailing concerns and how these are expected to play out in the years ahead. It is clear that crime prevention and detection strategies must keep pace with an ever-evolving technological landscape, and an understanding of how new technologies could be exploited for crime is essential for policy actors, law enforcement agencies and technology developers alike.
The above research was funded by the Dawes Centre for Future Crime at UCL. The Centre was established to identify how technological, social or environmental change might create new opportunities for crime and to conduct research to address them.
The report is the first in a series that will identify the future crime threats associated with new and emerging technologies and what we might do about them.