From pre-crime to Model of Care: Theorising AI and crime

The development of AI is likely to result in a fundamental redesign of our legal system and criminal justice responses. Even if we abstain from the debate on the likelihood of AGI and the consciousness of self-learning systems (and questions such as whether a conscious machine serving us, presumably without pay, would constitute a situation akin to slavery; see Tegmark, 2017; Schneider, 2019; Gunkel, 2020), dramatic changes might be just around the corner. This emerging digital technology will undoubtedly modify everyday life, to an extent we struggle to envision right now. The hybrid AI-human systems we enjoy today, and the ones we might have in the future, are likely mediators that could lead to many (un)wanted crime-related consequences, some of which are outlined in this chapter. Importantly, our understanding of the technology—how it works, how it comes to certain decisions, or how to challenge it—is expected to decrease.

Surveillance will not be the only basis for the likely supremacy of AI systems. The argument of ‘life improvement’ will continue to drive the exponential growth of this technology. The question is: what happens when the algorithms do not improve our lives? Should we ‘punish’ AI systems for offences they commit? Should machine learning systems be legal and moral agents and, as such, be held criminally liable when actus reus is present? What happens as technology develops and we come closer to the technological singularity? Do we need to start thinking outside the box and reconsider the fundamental principles of criminal justice and criminal law now, before we have the problem at hand?

As suggested earlier, pre-crime represents one of the fundamental shifts in contemporary engagement with offending and crime control: it aims to identify future threats, ‘predict’ crime, and strike before a crime occurs. This strategy is not underpinned by reducing opportunities or the means for offending, which is at the very core of crime prevention interventions. Crime prevention implies the curtailment of a future, likely event. Pre-crime is focused on many possible and projected futures that may or may not happen. It is imagining, designing, or planning future crimes, or simply being pinpointed as the future perpetrator, that triggers the intervention. The objective is to disrupt, incapacitate, or punish those singled out as future offenders via a variety of technology and non-technology-based interventions. With advances in AI, it is easy to imagine that pre-crime might become commonplace, with algorithms providing ‘evidence’ for uncertain, incalculable threats. In a context where many threats are deemed imminent, the argument that it would be best if smart things took the reins flourishes. While we do not always have faith in the impartiality and objectivity of human decisions, we ought to trust devices and machines even less. This scepticism is necessary both because such developments might lead to pre-crime interventions and because of the haphazard nature of current technological progress. Importantly, these processes challenge the notion of criminal liability, but in a vastly different way from the one discussed above. In cases of pre-crime-based criminal justice, there is no actus reus at all, all the while smart algorithms, on the surface so scientific and bullet-proof, rest on fragmentary and biased big data.

Therefore, it is essential to reconsider the philosophy that drives AI advances. Here, I draw on the works of Peter Asaro, one of the leading philosophers of science, technology, and media. Asaro (2019) suggests that in discussing AI, we need a new approach. Things around us are already ‘bursting with morality’, and ethics is no longer only a human affair (Verbeek, 2011: 2). Hybrid systems of today can perform actions, for good and evil. Yet the approach we see in AI development is mostly based on human ethics. The Model of Threat approach is widely used in developing contemporary machine learning and AI systems, including the ones used in crime control. This approach is underpinned by the assumption that everything falls into two categories—threats and non-threats. The same narrative applies to the majority of contemporary responses to crime and offending. AI systems are thus tasked with identifying threats and developing strategies to eliminate or reduce crime-related threats. The idea is that, through machine learning and large datasets, sophisticated AI will be more accurate, precise, and comprehensive in evaluating the risk of (re)offending of observed populations. Most of the systems used today and discussed in this chapter are examples of Model of Threat AI systems.
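To make the binary logic of the Model of Threat concrete, the following is a deliberately minimal, hypothetical sketch: a classifier trained on historical records that collapses a continuous risk score into a threat/non-threat verdict about an individual. The data, features, and threshold are invented for illustration and do not reflect any system discussed in this chapter.

```python
# Illustrative toy sketch of "Model of Threat" logic: binary risk classification.
# All data, features, and the 0.5 threshold are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical" records: two made-up features per person
X = rng.normal(size=(500, 2))
# Synthetic outcome: whether the person was later rearrested (0/1)
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new individual and collapse the probability into a binary verdict,
# exactly the threat / non-threat dichotomy the Model of Threat assumes.
new_person = np.array([[1.2, -0.3]])
risk = model.predict_proba(new_person)[0, 1]
verdict = "threat" if risk > 0.5 else "non-threat"
print(f"predicted risk = {risk:.2f} -> {verdict}")
```

The point of the sketch is the final line: whatever nuance the data might hold, the output is a verdict about a person, which is precisely the assumption the Model of Care rejects.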

A contrario, the Model of Care approach would see the development of AI actors based on values and goals that should benefit everyone in the system (human and non-human), those who use the system, and society. This method is largely non-binary. It is based on the understanding that social relations and contexts are not linear; instead, they are incredibly complex, and more and better data do not necessarily translate into solutions for existing social and institutional problems. Nonetheless, better and bigger data might provide opportunities to find solutions for the problem at hand and improve existing policy frameworks. Rather than looking to identify and predict future violations of the law, including future offenders, Asaro argues that AI systems should help us understand why people offend and focus on crime prevention, rather than pre-crime. By identifying young people at risk of offending and providing them with jobs, for example, researchers using not-so-state-of-the-art technology in the city of Chicago managed to reduce violence-related arrests among youth who participated in the program by 51% (Asaro, 2019: 48). If big data and machine learning are used to identify and reduce specific crime-inducing parameters, such as unemployment, poverty, social exclusion, and lack of education, some issues pertinent to offending might be addressed more effectively in the long run. Software engineers and developers cannot do this on their own. Artificial intelligence and machine learning need input from the social sciences and humanities. It is essential to do this now, as we might not have the tools to deal with the consequences of non-engagement in the future. The key question is not whether we can (or whether we should) make forecasts about crime and offending at all; big data and machine learning should be able to point us in the right direction, probably with greater accuracy as the technology develops. The question that matters is what actions we take based on these forecasts.
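By way of contrast with the previous sketch, the same tooling can be turned towards the conditions that the Model of Care foregrounds. The following hypothetical example, again with entirely invented data and variables (unemployment and school-dropout rates as stand-ins for crime-inducing parameters), estimates area-level associations rather than labelling individuals; it is a sketch of the orientation, not a method drawn from Asaro or the Chicago program.

```python
# Illustrative toy sketch of a "Model of Care" style analysis: relate area-level
# arrest rates to structural conditions, to guide preventive investment.
# All data and variables are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic neighbourhood-level data: [unemployment_rate, dropout_rate]
X = rng.uniform(0.0, 0.3, size=(200, 2))
# Synthetic arrest rate, loosely driven by both conditions plus noise
arrest_rate = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.02, size=200)

model = LinearRegression().fit(X, arrest_rate)

# Coefficients point to conditions worth addressing (jobs, schooling),
# not to individuals to be incapacitated.
for name, coef in zip(["unemployment", "dropout"], model.coef_):
    print(f"{name}: estimated association = {coef:.2f}")
```

The output here is not a verdict about a person but an indication of where preventive resources might matter, which is the shift in purpose the Model of Care describes.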

The potential commercialisation of combating future crime and offending was intentionally not included in this chapter. However, one must flag that companies that develop AI technology and automated justice tools are making large sums of money out of the business (see Wilson, 2018). There is also little transparency when it comes to understanding the development and deployment of such tools (Angwin et al., 2016; O’Neill, 2017). The consequences of the privatisation of prison systems in the Global North and the rise of the prison-industrial complex must serve as a lesson here. The fact that private companies sell products to government agencies (products that potentially have a vast impact on people’s lives) with limited, if any, oversight from academia and civil society is a worrying development indeed. As such, this issue warrants immediate and comprehensive scrutiny.

 