AI detectives: Actuarial justice on steroids

Artificial intelligence has been heralded as a game-changer when it comes to solving crimes and apprehending offenders. Innovations such as VALCRI exemplify this narrative. VALCRI, or Visual Analytics for Sense-Making in Criminal Intelligence Analysis, is an AI-based system that scans and processes police records, pictures, interviews, and other data to identify patterns missed by human analysts and to solve unresolved and historical crimes (Revell, 2017; Baraniuk, 2019). Processing vast numbers of images, extracting information from them, and drawing conclusions and links is predicted to revolutionise detectives’ work in the future. It could, as some commentators claim, entirely eliminate the need for human detectives (Revell, 2017). While systems such as VALCRI are expensive and currently available mainly to law enforcement in the Global North, other AI-based technologies are becoming mainstream in crime control. One type of AI-based technology widely used by law enforcement is facial recognition software. This technology is particularly useful in identifying victims of child sexual abuse, where the trauma of sifting through hundreds or thousands of images of abuse is replaced by devices and machines that swiftly identify, classify, and process victims and potential victims. Facial recognition has also been used to identify and ‘rescue’ victims of sex trafficking (Baraniuk, 2019). AI and machine learning systems also assist in identifying offenders whose identity is unknown (for example, when an offender’s face is obscured by a helmet, or only a partial or blurry photo of a face is available). Discovering and preventing online fraud is another area where AI systems have been delivering impressive results. These initiatives are especially pervasive in the United States (Rigano, 2019), but can they be trusted? Can interested parties challenge the decisions made by AI systems in court? Some critics warn that this might not be possible, given that proprietary rights over the technology rest with the businesses that develop and sell it to government agencies (O’Neill, 2017; Baraniuk, 2019).

Nevertheless, in the smart cities of the future, the use of AI-based systems in crime control is likely to grow. As weak AI slowly transitions to more robust AI systems, humans will increasingly rely on code to do the ‘legwork’ in criminal investigations. The data these systems access will not be limited to what we willingly ‘feed’ them to train learning algorithms. AI systems will likely have access to our social media (or the future equivalents of Facebook, Twitter, or Instagram), mobile phones and computers, sensors embedded in the IoT, data about a crime scene or a potential suspect held by government agencies and businesses, and more. Finally, criminal trials will increasingly rely on algorithms (Bennett Moses and Chan, 2014). The driver behind such developments is the promise of eradicating racial, gendered, or other bias from courtrooms (see Tegmark, 2017). In fact, smart algorithms are already in use in pretrial, parole, and sentencing decisions, especially in the United States. So-called ‘recidivism models’ are deployed in over 20 state jurisdictions, with code assisting judges in assessing the danger posed by offenders (O’Neill, 2016). While race is not one of the factors considered in such systems, other factors correlated with race are, such as the criminal history of family members, prior associations with the police, and the like (Broad, 2018). Machine learning algorithms of the future might, after processing vast amounts of data on personal information, financial and travel history, affiliations, prior convictions, sentences, and more, make decisions on bail, parole applications, and sentencing without human input. This is the second blackboxing moment identified in this chapter. Without transparency and understanding of the process by which these systems decide to refuse or approve bail, and without an opportunity to question such decisions, we might face a devastating blow to basic human rights and civil liberties. Smart systems may, yet again, turn out to be mediators, and the output of their actions may not be aligned with our goal of a fair, just, and transparent criminal justice system.
