Crime and punishment in the future Internet: State of the affairs and possible trajectories

To regain some control over the development of technology and its impact, and to propose legislative, policy, and technological interventions to follow such processes, we must analyse and theorise DFTs. We ought to start with available tools and frameworks and build new ones as we go. In Crime and Punishment in the Future Internet, I began by applying a range of relevant theories from various disciplines to DFTs. Of course, the list is not definitive. I invite you to leave feedback and comments on the book’s companion website, with ideas and suggestions to enrich these and further theoretical inquiries on the topic. In the previous chapters, ANT was utilised to identify the human and non-human sources of action in the future Internet. Humans and our companion species were supposed to mutually control our social interactions. Yet, we see instances of blackboxing in everyday applications of digital frontier technologies. Non-human mediators that do not translate input into a defined output, but carry an element of surprise and uncertainty, are gaining traction and altering the social fabric and relationships. Their growing cooperation, aimed at reaching mutual or individual goals, might lead to a scenario in which the goals of smart things are not aligned with ours. The digital life of our companion species might increasingly be out of our control and oversight, all the while our existence is being stripped naked under the watchful eye of our little helpers, mobile robots, and smart algorithms. We will all be targets, but many groups and individuals will feel the impact of this process disproportionately.

A potential impact of DFTs on offending, victimisation, crime prevention, and penal policies and practices is difficult to predict but should not be dismissed. In this first examination of emerging technologies and their complex relationships with criminology, conclusions should not and cannot be drawn. As demonstrated in previous chapters, AI, the IoT, autonomous mobile robots, and blockchain have already begun to fundamentally alter how crimes are committed, solved, administered, and punished. One of the questions posed in this book is whether AI should be subject to criminal law and criminally liable, or whether our focus should be on humans: coders and users. This dilemma should be read and analysed within the broader logic of a Catch-22: shall we adopt a human-centric or a post-human approach to technology? Given that the standpoint adopted in this book is that the thing-human alliance is critical for understanding crime and offending in the future Internet, it was suggested that we ought to think outside the box and consider direct liability in some instances. If AI, AI-powered mobile robots, and IoT systems act as mediators and create ‘emergence’, in which AI acts beyond originally intended ways, would this approach not be the most appropriate one? This is a topic for future criminological research, in partnership with legal scholars and experts. The book also raises many other matters pertinent to offending with the assistance of, or performed by, DFTs. Of course, as is to be expected in a book that is the first examination of an issue, I could not offer more than a brief overview of pressing contemporary issues: those identified as such by experts and commentators (and by me, using scanning and scenario-writing methods). Other applications of emerging technologies in offending are likely and, as such, need to be investigated in the future. One such example is the use of AI and the IoT in stalking and family violence.

Digital frontier technologies have been changing government agencies’ efforts to prevent (or rather foresee) criminal behaviour, assisting in criminal proceedings and, overall, changing our engagement with issues around policing and victimisation. Many such examples have been outlined in this volume: from HunchLab, VALCRI, and facial recognition machine learning applications, to the detection and investigation of traffic offences and ‘remote control’ policing, and the investigation of online fraud, child sex offences, human trafficking, and modern slavery. The potential of blockchain to ‘map’ offending, presented in Chapter 6, might look a bit far-fetched. However, such scenarios require further exploration. Technological artefacts could, arguably, abolish certain types of criminality, as well as the need for discretionary policing practices. One illustration offered in the book is traffic offences and autonomous cars. This development could be particularly important for groups disproportionately targeted by police, such as racial and ethnic minorities. Technology could also address some misuses of other DFTs, such as the use of blockchain in stopping cyber and DDoS attacks in smart cities of the future. Emerging technologies could also save lives and prevent suffering, as witnessed in the debate on humanitarian drones. They could assist us in abandoning the ‘rescue mentality’ and return leverage and agency to those impacted by globalisation-induced crimes and injustices.

Policing in the future Internet is likely to look somewhat different, with more actors involved in the security industry (as demonstrated in the case of mobile robots such as K5, or via distributed ledger technology). Whether these technological advancements are going to eliminate the need for over-policing, and how the police are going to respond to the challenges their use prompts (such as, for example, the deployment of drones for public disturbances and offending, or autonomous swarm drones patrolling the ‘roborder’ that could tap into portable ‘smart things’ to determine whether you are a legitimate border crosser, as debated in Chapter 5), is anyone’s guess. Are we going to witness a new dawn of police powers? Is law enforcement going to be delegated to other actors and consumers? We do not have answers to such queries right now. The change is ongoing and fast, and we need to monitor, research, analyse, predict, and influence its development as much as possible. Courtrooms will change as well. Ambient intelligence has been increasingly used in criminal proceedings, where our little helpers routinely testify against us. In the smart cities of the future, we are likely to see more technological innovations employed for this very purpose, and we should not outright dismiss such advances.

At the same time, new types of pervasive and hard-to-opt-out-of surveillance at the border and beyond have been deployed. The goal of such interventions is to obtain information about who we are and where we are at all times. ‘Dronisation’ of borders is one example outlined in this volume that illustrates this process. Keeping humans deemed risky and dangerous (illegalised border crossers, potential offenders, and recidivists) at arm’s length is the pinnacle of such interventions. Privacy violations, algorithmic governance, and self-imposed regulatory behaviours are mechanisms to do just that. Data, it is argued, does not lie; it is objective and accurate. We need to learn to unpack the truth and foresee future offending as predicted by smart algorithms. Actuarial justice, the process of identifying and managing people according to risk before they commit a crime, has found new ground in AI. Pre-crime narratives call for increasingly earlier interventions, not by creating fewer opportunities for offending, but by ‘predicting’, using machine learning algorithms, the crime scene and prospective wrongdoers. Emerging technologies move us further away from analysing past patterns of crime and deviance towards predicting the future. Threats and risks (of offending, deviant behaviour, violations of border and migration regimes, and the like) are deemed calculable, foreseeable, and inevitable, even though they are none of these things. Predictive policing based on crime and non-crime-related data, likely to include information from social media platforms or ambient intelligence, is hailed as scientific, credible, and revolutionary. Identifying places and people linked to future crimes is no longer in the realm of science fiction. Fuelled by biased information fed into these systems, non-transparent, inaccurate, and highly lucrative crime-predicting applications not only make up data about crime and offending; they increasingly remove humans from such processes.
PredPol 4.0 is likely to be harmful to many, and we should not underestimate its potential detrimental effects. Smart things underpinned by machine learning will ‘disrupt’ crime and ‘pre-empt’ offending by arresting, prosecuting, adjudicating, and sentencing people for crimes they may never commit. They could turn into superdetectives and judges that punish intent, or simply ‘wrong’ associations: being in the wrong place at the wrong time, or being of a certain race or class. They will be heralded as objective, and the decisions things make will be final, irrevocable, and almost impossible to contest. We are unlikely to understand how algorithms come to such verdicts but will trust their objectivity. This blackboxing potential of DFTs is one of the key concerns raised in this volume. The more successful smart things become, the more opaque they will be for consumers and experts alike. With apparent consequences that impinge on human rights and civil liberties, the removal of human agency, accompanied by a lack of safeguards that would ensure a fair and transparent criminal justice process, is the path we must avoid.

So, how should we regulate or reverse these developments? Some suggestions were made in this volume, such as the adoption of the Model of Care approach in AI (Asaro, 2019) and the implementation of Asimov’s Three Laws of Robotics, or an ‘updated’ version of the rules that would acknowledge the intricacies of human-thing assemblages. Rather than working on technology that aims to identify and prevent risks and threats, we should aspire to develop technology that attains values and goals that would benefit everyone. This process is not merely a revaluing of qualitative vs. quantitative data. Artificial intelligence and other emerging technologies should be built to learn, adopt, and retain our goals. Issues around crime, victimisation, crime prevention, recidivism, and penal policies are complex and nonlinear; as such, they cannot be solved by the binary approach of risk/threat vs. non-risk/non-threat. Using some digital frontier technologies can address the limitations of others: blockchain’s potential to address concerns around privacy and surveillance brought by the development of AI and the IoT, but also serious crimes such as labour exploitation, human trafficking, and modern slavery, requires close inspection. Finally, more data about the social does not automatically translate into better solutions; yet it can certainly help in finding such solutions. It is what we do with data that matters. AI, autonomous mobile robots, the IoT, and blockchain have the potential to assist us in understanding why people offend, in implementing better crime prevention strategies, and in better addressing the underlying contexts that enable offending, such as poverty, unemployment, discrimination, homelessness, and lack of education. This approach is expected to yield more tangible and long-lasting results than those that focus on using technology to ‘predict’ what humans (or machines, for that matter) might do in the future.
