Actuarial automated justice: Predictive policing, pre-crime and automated administration of the law

Ubiquitous surveillance, pre-empting risk, and big data foster another crucial theoretical framework used in the book. Actuarial justice is the process of identifying, classifying, and managing suspect populations according to the level of risk they supposedly pose before they commit a crime or any other wrongdoing (Feeley and Simon, 1994). The backbone of actuarial justice is the identification of risk and the anticipation of action: individuals and groups are thus observed and analysed as risk objects. Once mathematical calculations and algorithms identify risk, agencies deploy a range of strategies that manage risk and ‘prevent’ future crimes by incapacitating prospective offenders. All elements of the criminal justice system follow this process: police, criminal courts, and probation agencies in both the Global North and the Global South employ some form of ‘algorithmic governance’ (Danaher et al., 2017).

Predictive policing (or ‘crime forecasting’) is the flagship of algorithmic governance and a catchphrase of police practitioners (Wilson, 2018). It gained traction particularly in major cities in the United States. As the Chief of the Los Angeles Police Department stated in 2009,

[v]ery soon we will be moving to a Predictive Policing model where, by studying real time crime patterns, we can anticipate where a crime is likely to occur.

(cited in Ferguson, 2012: 261)

Today, the prediction of crime is ‘the new watchword for innovative policing’ (Ferguson, 2017: 1112). Predictive policing has generated considerable attention in the media and public discourse, so much so that Time magazine named it one of the 50 best inventions of 2011 (Grossman et al., 2011). The idea behind this ‘smart’ crime forecasting is that by using big data, both crime-related and not, we can identify not only probable future crimes and where they are likely to occur, but also prospective offenders and victims (Perry et al., 2013; Wilson, 2018). Identifying future risks via ‘evidence-based’ analytics is widely accepted as being more credible, scientific, and impartial than the analogue, discretionary practices of the relevant professionals (Hannah-Moffat, 2018). As such, policing that rests on ‘traditional’ methods of crime prevention is increasingly being replaced with technocratic crime forecasting.
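The mechanics of place-based forecasting can be made concrete with a deliberately minimal sketch. The Python fragment below bins historical incidents into grid cells and flags the busiest cells as tomorrow’s ‘hotspots’; the coordinates, cell size, and function names are invented for illustration, and operational systems are far more elaborate, but the underlying logic of projecting past patterns forward is the same.

```python
# A toy illustration of place-based crime forecasting: historical
# incidents are binned into grid cells, and the cells with the most
# past incidents are flagged as future 'hotspots' for extra patrols.
from collections import Counter

def predict_hotspots(incidents, cell_size=0.01, top_k=5):
    """Rank grid cells by historical incident counts.

    incidents: iterable of (latitude, longitude) tuples.
    cell_size: grid resolution in degrees.
    top_k:     number of cells to flag for extra patrols.
    """
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in incidents
    )
    # The 'forecast' is simply the most incident-heavy cells so far.
    return [cell for cell, _ in counts.most_common(top_k)]

# Example: three past incidents cluster in one cell, so that cell
# is 'predicted' to host future crime.
past = [(34.050, -118.250), (34.051, -118.251),
        (34.052, -118.249), (40.710, -74.000)]
print(predict_hotspots(past, top_k=2))
```

The sketch makes visible what the prose describes: the ‘prediction’ is an extrapolation of recorded history, nothing more.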

Ferguson (2017) identified three stages in the evolution of predictive policing practices in the US: Predictive Policing 1.0 focused on property crimes; 2.0 aimed to predict place-based violent crimes; and 3.0 concentrated on identifying future offenders. All systems involve computer models and algorithms that use big data to predict future crime locations and/or perpetrators, and in theory should assist police in distributing resources more effectively. As such, predictive policing is an extension of intelligence-led policing, but with a twist: rather than identifying past crime patterns, it extrapolates from those patterns to the next crime location or offender (Ferguson, 2012). Identifying probable future offending underpins the idea that law enforcement can and should act before crime happens, with the help of technology. Welcome to the era of pre-crime, where agencies of social control aim to disrupt, incapacitate, restrict, and ultimately punish future crime threats (McCulloch and Wilson, 2015).

Introduced in Philip K. Dick’s sci-fi short story Minority Report and revived in the Hollywood blockbuster starring Tom Cruise, the concept of pre-crime has been a focus of criminology since the early 2000s. Pre-crime strategies are underpinned by an anticipatory logic: crimes that have not happened (and may never happen) are acted upon as if they had. A post-crime approach, by contrast, triggers intervention upon the commission of a crime, while risk-based crime prevention focuses on creating conditions in which the commission of a future criminal event is minimised. Pre-crime’s attention is, as McCulloch and Wilson (2015) point out, on uncertain, incalculable threats. As such, pre-crime strategies focus on many possible future scenarios that might never eventuate and penalise people for such ‘behaviour’. After 9/11, Western democracies deployed a range of pre-crime based anti-terrorist interventions (McCulloch and Wilson, 2015; Wilson, 2018). Pre-crime narratives thrive in times of uncertainty.

Data, it is often claimed, do not lie. As such, many practitioners and commentators have hailed systems based on ‘crunching’ large amounts of data as a bias-free, objective method that can revolutionise policing in the future (Ferguson, 2017; Thomson, 2018). Yet, as Cathy O’Neil (2016: 3) skilfully argues,

[t]he math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into software systems that increasingly managed our lives. ... Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and oppressed in our society, while making the rich richer.

As will be discussed in the following chapters, the limitations of big data and algorithms in crime control are significant. There are many critical deficiencies to keep in mind when debating actuarial automated policing and justice: lack of transparency (we do not know where the data comes from and what is included), limits on accuracy and on the ability to collect ‘all’ data about crime, error and potential bias (the ‘garbage in, garbage out’ argument concerning, for example, which crimes make it into crime statistics), and unforeseeable social changes that impact on crime and offending. As Kitchin (2014), Chan and Bennett Moses (2016), and Broad (2018) note, even in the era of big data systems, data is simply a sample: it cannot capture all inputs. As such, big data occasionally has the effect of making up data (Beer, 2016), and decisions based on it are likely flawed. Digital frontier technologies bring new challenges when it comes to predictive policing and algorithmic governance, especially given the development of AI. We are witnessing a new development that aims to create ‘master algorithms’ able to process, learn, and adapt to decision-making that will not require (or tolerate) human input or control (Danaher et al., 2017). Devices and machines will be critical tools in an actuarial justice-based pre-crime approach, in which the aim is to identify future offending and offenders via DFTs and disrupt them without human interference. Soon, algorithms are likely to be arresting, prosecuting, and sentencing people for future crimes they have not yet committed and may never commit. Critically, the system might be unfair, as algorithms do not have fairness and equality embedded in them.
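The ‘garbage in, garbage out’ problem can also be illustrated in miniature. In the hypothetical simulation below, two districts have, by construction, identical underlying crime rates, but one starts with more recorded arrests because it was patrolled more heavily; allocating patrols in proportion to recorded arrests then perpetuates the initial imbalance indefinitely. All figures are invented assumptions for illustration only.

```python
# A minimal sketch of the bias feedback loop: arrests scale with
# patrol presence, not with underlying crime, so a model trained on
# arrest records keeps 'confirming' its own historical skew.
import random

random.seed(42)
true_crime_rate = {"A": 0.10, "B": 0.10}   # identical by construction
recorded = {"A": 30, "B": 10}              # historical bias in the data

for year in range(5):
    total = sum(recorded.values())
    # Allocate 100 patrol units in proportion to recorded arrests.
    patrols = {d: 100 * recorded[d] / total for d in recorded}
    for d in recorded:
        # More officers in a district means more incidents recorded.
        recorded[d] += sum(
            random.random() < true_crime_rate[d]
            for _ in range(int(patrols[d]))
        )
    print(year, {d: round(patrols[d]) for d in patrols})

# The initial 3:1 patrol split persists year after year even though
# the underlying crime rates never differed: the algorithm learns
# from records its own deployments produced.
```

The point of the sketch is not the numbers but the circularity: the data the model ‘objectively’ consumes is itself a product of prior policing decisions.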

The notion of the ‘objectivity’ of data and algorithms underpins automated justice performed by criminal courts. Decisions on sentencing, parole, and bail once delivered by humans, based on their expertise and experience, are now increasingly delegated to code. Such decisions are difficult to challenge because of the lack of transparency associated with big data and algorithms, and our lack of understanding of the process itself. Once humans are removed from this process, which is a possibility, the protection of human rights and civil liberties will depend entirely on smart things. Yet, every algorithmic decision-making process raises at a minimum the two concerns identified above: efficiency and fairness (Danaher et al., 2017; see also Broad, 2018). Issues such as the difficulty of predicting human behaviour, inaccuracy and flaws in big data, lack of transparency, arbitrariness, and the lack of comprehension that things could exhibit in decision-making are harmful when a credit card application is wrongly rejected. They are profound if one gets arrested, prosecuted, convicted, or imprisoned because of an automated injustice.
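What such delegation to code can look like is suggested by the toy risk-score sketch below. The features, weights, and threshold are entirely hypothetical; real instruments (such as the much-debated COMPAS tool) are typically proprietary, which is precisely the transparency problem at issue: the defendant cannot inspect or contest the weights that classify them.

```python
# A deliberately simple sketch of an actuarial risk score of the
# kind delegated to code in bail decisions. Every feature, weight,
# and threshold here is an invented assumption for illustration.

WEIGHTS = {             # opaque-to-the-defendant weights
    "prior_arrests": 0.6,
    "age_under_25": 0.8,
    "unemployed": 0.5,  # a proxy that penalises the poor (O'Neil, 2016)
}

def risk_score(person: dict) -> float:
    """Weighted sum of features; higher means 'riskier'."""
    return sum(WEIGHTS[k] * person.get(k, 0) for k in WEIGHTS)

def bail_decision(person: dict, threshold: float = 1.0) -> str:
    # The entire 'judgment' collapses into one threshold comparison.
    return "detain" if risk_score(person) >= threshold else "release"

print(bail_decision({"prior_arrests": 1, "age_under_25": 1}))  # detain
print(bail_decision({"prior_arrests": 1, "unemployed": 0}))    # release
```

Even this caricature shows where discretion goes: into the weights and the threshold, chosen by fallible designers and invisible to the person being scored.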

Using more theories is always a good thing, right?

In addition to the above broader social and criminological theories, a range of specialised theoretical concepts will be used to unpack the complexities of DFTs. In Chapter 4, for example, I use the concept of the technological unconscious (Thrift, 2004; Beer, 2009; Wood, 2016) to explain the development and advancement of the IoT. The concept of the human-data assemblage (see Lupton, 2014) will be used to explore the expansion and impact of human-to-non-human and non-human-to-non-human relationships in AI and IoT networks. In Chapter 3, when debating AI, Peter Asaro’s ‘Model of Care’ (as opposed to ‘Model of Threat’) approach to developing and applying AI will be adopted. These and other theoretical models will add much-needed nuance in dissecting and forecasting the social in the future Internet.

 