Scanning and scenario writing: Artificial intelligence, law, offending and crime control

The ultimate criminal? When smart things do the wrong thing

Machine learning algorithms have played an essential role in the social fabric for decades. Facial recognition software, virtual personal assistants, product suggestions, car autopilots, search and movie recommendations are some prominent examples. There is an almost universal agreement among scholars and practitioners that legal systems are trailing behind when it comes to AI development. Within a broader ‘crimes with AI, crimes against AI, and crimes by AI’ framework (Hayward and Maas, 2020), the first question that needs unpacking is whether AI-powered devices and machines should be subject to criminal law. One might argue this question is more appropriate for science fiction than academia, or comparable to debates vis-à-vis medieval trials of dogs for criminal conduct (Hildebrandt, 2008). However, the matter is both complex and relevant. In the early 1980s, a robot in a Japanese motorcycle factory pushed a worker into an adjacent operating machine after identifying the human as a threat to its mission (Hallevy, 2010). While this was not the first case of a death caused by a smart machine, and while such incidents are extremely rare (Tegmark, 2017), the case sparked a debate in academia about AI’s potential in assisting with or committing a crime (Hayward and Maas, 2020; King et al., 2020). From malware, phishing attacks, identity theft, blackmail, fraud, market manipulation, and ‘deep fakes’ to assaults and homicide, AI systems’ potential in enabling and carrying out criminal acts is indisputable. As I explore later in the book, AI can also be hacked/manipulated and turned to malicious use.

Recently, autonomous weapons powered by AI technology have been singled out as one of the great dangers for humankind (Slijper, 2019; this question will be explored in detail in Chapter 5). But when AI offends because of malfunctioning software or because its goals were misaligned with ours, who is responsible? The issue of criminal liability leads to the debate around AI ethics and whether technological artefacts—devices and machines—need to be considered ethical agents (for a comprehensive overview of this issue, see Verbeek, 2011). Currently, AI systems are not subject to criminal law and have no legal rights or duties. Smart things, it is commonly argued, do not have mind, consciousness, free will or intentionality and cannot be held responsible for their actions (Verbeek, 2011). The question I begin to unpack below is whether we need to re-think this approach given the advances in, and impact of, technology, and whether we should consider smart objects as moral agents in themselves—entities that can perform actions, for good or evil. This issue, it seems, is particularly important given their increased autonomy within growing thing-to-thing and thing-to-human networks. Floridi and Sanders’ (2004) influential theory of the moral agency of intelligent technologies and ANT constitute a broad framework for the upcoming analysis. They suggest that technological artefacts need to have interactivity, autonomy, and adaptability to become moral agents. Artificial intelligence and machine learning systems tick all the above boxes. They are actants, as humans are no longer the sole actors in the social setting. Non-humans (such as DFTs) and their users/consumers create hybrids that act as mediators, as defined inputs do not translate into defined outputs; ergo, crime occurs, despite carefully crafted code by software developers and correct usage by consumers. As such, it is time to think about criminal liability that is not (or not exclusively) linked to their human companions.

In common law, criminal liability has two key elements: actus reus—a factual, external element, the criminal conduct—and mens rea—an internal, mental element, knowledge or general intent towards the actus reus. The guiding principle is actus reus non facit reum nisi mens sit rea (‘the act does not make one guilty unless the mind is also guilty’). Mens rea can be knowledge about the actus reus, or negligence (about something a reasonable person should know). If one of these elements is missing, no criminal liability can be established for many offences in common law systems. Actus reus is contentious when it comes to AI agents, but it is plausible. Mens rea, however, is mostly considered in conjunction with a human who possesses mens rea, while AI commits the actus reus (King et al., 2020). It is because of mens rea that some authors suggest we should seek to establish criminal liability elsewhere, not with the ‘thinking machines’ themselves. For example, Goodman (2016) suggests we need to investigate computer coders, as they should be responsible if AI systems break the law. The code, he argues, is the brain of smart things but is ultimately the product of human design. Hallevy (2010), on the other hand, outlines three models of AI criminal liability: the perpetration-via-another, the natural-probable-consequence, and the direct liability model. The first two are dependency models, linked to humans, in which AI is not an independent entity. In the perpetration-via-another model, AI systems have capabilities that might be comparable to the capabilities of a child, or a person who is mentally incapacitated. This is the liability system we have today for child soldiers, who also do not have mens rea (Perry and Roda, 2016). As such, legally, they are innocent agents, and criminal liability is established elsewhere. In the case of a child who commits a crime as instructed by a parent or army commanders, they are the ones who will be held liable. If AI commits a crime, it will be the software developers or end-users, as mens rea is established in the person who developed or operated the system. This model is human-centric and is not suitable in cases where AI was not designed to commit a specific offence or when a crime is committed based on code that AI develops by itself. The natural-probable-consequence liability model does not look for mens rea but at coders’ or users’ ability to foresee the potential commission of offences as a natural and probable consequence of AI systems’ actions.

In the direct liability model, AI systems are the ones legally sanctioned. Hallevy (2010: 187) argues that attributing actus reus to AI devices or machines is relatively easy, but the internal element is ‘the real legal challenge in most cases’. To establish knowledge, intent, or negligence of AI systems is complicated but conceivable, and AI could be criminally liable regardless of the liability of humans. As Kaplan (2016: 106) suggests, ‘[t]here is no reason you can’t write a program that knows what it is doing, knows it is illegal (and presumably therefore unethical), and can make a choice as to what actions to take’. This concern is called emergence, and it refers to the process of AI agents acting beyond the ways originally intended by developers—or, to use ANT terminology, becoming actants/mediators. Computer coders can and often do ‘encourage’ specific scenarios undertaken by the AI; nevertheless, AI systems can and do take a particular pathway autonomously, without human interference (see King et al., 2020). They undoubtedly have interactivity, autonomy, and adaptability, and use these skills for either morally good or evil ends. Thus, they fall within ‘aresponsible’ or ‘mind-less’ morality, within which intention or a guilty mind is not necessary for accountability (Floridi and Sanders, 2004). If devices and machines learn independently from humans and develop/alter code, is it therefore prudent to look at them vis-à-vis criminal liability? After we address this question, the next quest to ponder is an appropriate punishment for AI (Hildebrandt, 2008; for some ideas, see Hallevy, 2010) and what rights, if any, should extend to AI systems (Gunkel, 2020). My intention here is not to dig deep into legal philosophy and theory; I am simply flagging concerns legal scholars and criminologists already face. These questions will be of utmost importance given the development of emerging digital technologies in the smart cities of the future.

One of the objectives for AI development at Google is that the technology should be socially beneficial and ought to avoid creating or reinforcing unfair bias (Pichai, 2018). As this objective implies, and as its leading developers acknowledge, AI systems could and indeed often do reproduce and bolster unfair biases and discrimination (Yampolskiy, 2019; Broad, 2018). One of the most notorious examples is Tay, an ill-fated Microsoft chatbot. Developed to mimic the language patterns of young Americans and programmed to learn from interactions with punters on Twitter, Tay turned into an aggressive racist, Nazi fan and bigot within hours of becoming operational. Microsoft shut down the experiment 16 hours after the launch (Chase, 2018; Bunz and Meikle, 2018).

AI’s ‘neutrality’ has long been debated in popular culture and academia. Harari (2018: 60) suggests that if we program the software so that it ignores race, gender, and class, the ‘computer will indeed ignore these factors because computers don’t have a subconscious’. Goodman (2016), Broad (2018) and O’Neill (2016), however, claim that human bias saturates existing algorithms. As Cathy O’Neill famously declared, ‘algorithms are opinions embedded in code’ (O’Neill, 2017: 1:40). They, she continues, need two key things to be built: past data and a definition of success. Sexism, racism, xenophobia, and other forms of discrimination do not exist in a bubble and are embedded in past data. Removing the embedded bias that exists in the large annotated datasets AI systems use for training is difficult. Virginia Eubanks in Automating Inequality powerfully demonstrates how algorithms reinforce (if not further produce) inequality in three critical areas of public services in the United States: homelessness, welfare provision, and child protection services (Eubanks, 2017). It is also hard to prevent programmers and engineers from inserting their own subconscious biases into code. Indeed, as the next segment indicates, the notion that AI can further inequalities has had significant consequences when it comes to crime control in the Global North.
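O’Neill’s point can be made concrete with a minimal, purely illustrative sketch. The neighbourhood labels, the records and the arrest rates below are invented; the only thing the example shows is that when ‘success’ is defined as predicting recorded arrests rather than actual offending, a ‘model’ built from past data inherits whatever enforcement bias produced those records.

```python
# Minimal sketch of "past data + a definition of success = encoded bias".
# All data here are hypothetical. 'Success' is defined as predicting recorded
# arrests -- not offending -- so historical policing patterns flow straight
# into the risk scores.

# Hypothetical historical records: (neighbourhood, was_arrested)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),    # heavily patrolled area
    ("B", True), ("B", False), ("B", False), ("B", False),  # lightly patrolled area
]

def risk_score(neighbourhood):
    """The 'model': the arrest rate observed in past data for that neighbourhood."""
    records = [arrested for area, arrested in history if area == neighbourhood]
    return sum(records) / len(records)

# Otherwise identical individuals receive different scores purely because of
# where past enforcement happened to concentrate.
print(risk_score("A"))  # 0.75
print(risk_score("B"))  # 0.25
```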

As suggested in Chapter 2, actuarial justice has experienced a renaissance in the twenty-first century. To predict offending and recidivism, government agencies rely on AI and big data, especially in the United States and, to a lesser extent, the United Kingdom and continental Europe (see Gerritsen, 2020). Applying algorithms and machine learning systems to large datasets to forecast where crimes are likely to occur and who might commit them has been at the forefront of what some commentators have called the ‘digital policing revolution’ (EMPAC, 2017). The modelling usually ties future offending to places (Predictive Policing 1.0 and 2.0) and people (Predictive Policing 3.0), based on age, criminal record and history, employment, and social affiliations. Thus, crime and environmental data assist in predicting where and when police officers should patrol the streets to deter or detect crime (Shapiro, 2017). But just what data do they use, and what is the definition of success?

One of the best-known examples of a machine learning ‘crime risk forecasting’ system was HunchLab. The web-based system analysed records of historical crime data in certain areas as well as current crime reports, emergency calls made by the public, geographical features (such as train stations, pubs, and bridges), environmental factors (lighting), social data (major events and gatherings), time of the year, day of the week, weather reports and the like (Chammah and Hansen, 2016; Joh, 2017b; Cheetam, 2019). Based on a vast amount of data—some of which embeds existing bias in crime reporting and processing—HunchLab signalled areas where potential crime might happen. Importantly, while HunchLab did use non-crime-related data, it did not use data about people, nor was it focused on predicting people’s actions. The developers also intentionally excluded data about prior arrests and convictions, social media, and other personal data, and drew on multiple, independent sources (not just law enforcement data) in an attempt to avoid bias (Cheetam, 2019). The founders of HunchLab sold the business in 2018 after they realised the product might lead to over-policing of certain social groups, civil rights violations, and abuse of power (although they cited business reasons as the key driver for the sale; see Cheetam, 2019).
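The general shape of such a place-based forecaster can be sketched in a few lines. This is emphatically not HunchLab’s actual model or code; the grid cells, features, data and choice of regressor below are assumptions for illustration, showing only how per-cell inputs of the kind listed above might feed a model that flags likely hotspots.

```python
# Toy place-based crime risk forecaster (illustrative assumptions only).
# Each grid cell gets a feature vector of the kinds of inputs described above,
# and a regressor is trained to predict the next period's crime count per cell.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_cells = 500  # hypothetical grid cells covering a city

# Synthetic per-cell features: past crime count, emergency calls, and binary
# flags for near_station, near_pub, poor_lighting, weekend, raining (all invented).
X = np.column_stack([
    rng.poisson(3, n_cells),           # historical crime count in the cell
    rng.poisson(5, n_cells),           # emergency calls from the public
    rng.integers(0, 2, (n_cells, 5)),  # environmental / temporal flags
])

# Synthetic 'ground truth': next-period counts loosely driven by the features.
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] + 1.5 * X[:, 3] + rng.poisson(1, n_cells)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
risk = model.predict(X)

# Cells with the highest predicted counts become the suggested patrol areas.
print("Flagged cells:", np.argsort(risk)[-10:])
```

Whether such a system reproduces bias depends almost entirely on what is placed in the feature matrix and how the target is recorded, which is why HunchLab’s deliberate exclusion of personal and arrest data mattered.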

Many contemporary predictive policing and offending interventions have been criticised as racist (Shapiro, 2017; Joh, 2017a; Angwin et al., 2016; Broad, 2018; Zavrsnik, 2018; Gerritsen, 2020) or otherwise discriminatory (Ferguson, 2012). The literature and research suggest that predictive policing strategies disproportionately target minority neighbourhoods (Chammah and Hansen, 2016; Shapiro, 2017). Studies have found that black people were 77% more likely to be predicted to perpetrate a future violent crime, and 45% more likely to be predicted to commit any crime, than non-black populations. At the same time, AI systems fail to predict crime: only 20% of the people the system predicted to commit a violent crime did so (Cush, 2016; Angwin et al., 2016). Crime forecasting strategies underpinned by racially biased policing data integrate these biases into the analyses, and police who find criminally suspicious behaviour based on these predictions reinforce the biases. If patrol cars sent to prevent burglary find a couple of youths swigging from a bottle and acting suspiciously, they will arrest them, creating a ‘pernicious feedback loop’ (O’Neill, 2016): they create data to justify more policing.
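The dynamic is easy to reproduce in code. The following sketch uses entirely invented numbers and a deliberately crude assumption (crime is only recorded where officers are present to observe it), but it shows how a small initial skew in recorded data can keep ‘justifying’ more policing of the same area even when underlying offending is identical.

```python
# Minimal simulation (illustrative assumptions only) of the 'pernicious feedback
# loop': two areas with IDENTICAL underlying offending, but patrols go each week
# to wherever the recorded figures look worst, and offences are only recorded
# where officers are present to observe them.
true_offences_per_week = {"A": 100, "B": 100}   # identical underlying offending
recorded = {"A": 12, "B": 10}                   # slightly skewed starting data

for week in range(5):
    # Dispatch patrols to the area the data says is the 'hotspot'.
    hotspot = max(recorded, key=recorded.get)
    # Officers observe (and record) a share of actual offences there; the
    # unpatrolled area generates almost no new records.
    recorded[hotspot] += int(0.2 * true_offences_per_week[hotspot])
    print(week, dict(recorded))

# Area A's recorded crime climbs week after week while B's barely moves,
# 'justifying' ever more attention to A despite identical underlying offending.
```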

In the era of the exponential growth of computing abilities and Moore’s Law, which predicts that the processing power of computers doubles every two years (Chase, 2018), we will see dramatic changes in predictive policing and automated justice that could have harmful consequences for many. With the development of AI, we might soon witness the emergence of Predictive Policing 4.0, based on machine-learning pre-emption. The logic here will not be to identify where future crimes might happen or even who is going to perpetrate them, but to use algorithms to forestall offending altogether. Strategies to prevent a likely event (however loosely defined) have been replaced by addressing uncertain and incalculable threats. With the development of DFTs, we are a step closer to ‘substantive coercive state interventions targeted at non-imminent crimes’ (McCulloch and Wilson, 2015: 5). Artificial intelligence systems, it is argued, could do this by removing humans from the process of decision-making. The pace of AI expansion and systems’ learning abilities is so fast that, if we remain passive, we risk that tomorrow’s technology might challenge us in a way we cannot foresee. Perhaps not to the extent of an end-of-life-on-Earth scenario, but in a way in which key postulates of our legal and social order could irrevocably shift. This is not to suggest that technological singularity or strong AI will become a reality. However, we cannot ignore the pace of development in AI and machine learning, and we need to be prepared (as much as possible) for the potential consequences this expansion might bring for years to come. Smart algorithms might become super-efficient enforcement agents that promptly detect and punish every violation of the law, with an intent to do so (we see rudiments of this approach in China’s social credit policy, in which jaywalking or crossing on a red light results in losing points and certain rights; see Kobie, 2019; Chase, 2018). They will be on 24/7 and will not ignore our (actual or future) violations. There will be no need for police, courts, judges, or appeals. As I explore later in the book, the system’s decision will be final and irrevocable, and the penalty will be imposed regardless of who the transgressors are and whether they are likely to commit the crime in the future. We are apt to believe in the fairness of such decisions, as we falsely trust the notion that things are impartial and have no bias. But, given that we are not quite there yet, what is the point in worrying about it? Even if we reach this stage at some point, many argue the above scenario might not be so bad after all. Uncertainty and threats are all around us, and given that humans are deeply flawed and not capable of rising to the task, why not let the smart things do the job for us? I come to this pivotal point in the final segment of this chapter.

Other scenarios, however, are more plausible, at least in the immediate future. It is likely, for example, that AI-based predictive policing will incorporate data that HunchLab refused to embed in its system—data about people, including records and inputs collected and provided via fitness trackers or IoT devices—to predict future crimes and identify offenders. In the era of ever-growing big data, and underpinned by continuous surveillance and monitoring of potential suspects, algorithms will be assessing and determining their (and our) likely behaviour, which might lead to further pre-crime interventions. In Minority Report, the mutated humans called precogs previsualise future offences, prompting law enforcement to react before crimes occur. In the AI-based future, the focus will be on offenders, but not exclusively those who have previously offended or people affiliated with other lawbreakers and gangs. Algorithms will calculate the likelihood of offending, based on a range of big data about you, such as your habits, associations, and overall digital footprint. As the pre-crime approach is largely speculative, the use of technology will make it appear evidence-based. In other words, the process will provide the illusion of scientific neutrality (McCulloch and Wilson, 2015). If AI is likely to reinforce the stereotypes already entrenched in our society, this new pre-crime approach will fundamentally change the way we engage with crime and offending, and not in a good way. Critically, this development is likely to be the ultimate blackboxing moment that Bruno Latour (1999: 304) and others warned about a while ago. The more technology succeeds, the more obscure and opaque it will become; we will be aware of inputs and outputs but not the internal complexity of the artefacts and smart things.

 