Where to go from here?

While acknowledging the standpoints of Luddites, sceptics, and techno-utopians in this book, I adopted the stance of beneficial AI, which looks at how to harness existing and forthcoming technology for good. I am not suggesting that other philosophical frameworks are less relevant, or that they are erroneous. A contrario, such perspectives can be, and often are, both comprehensible and theoretically and morally sound. I simply believe this position enables us to engage constructively with what is known as the ‘control problem’ and with the impact of the unsupervised agency of technological artefacts. One example of this is the process of reproducing and reinforcing bias via technology, a topic explored at length in this volume. The sections on automated profiling and crime forecasting (Chapter 3), as well as policing in the era of smart mobile machines (Chapter 5), detail many conundrums pertinent to the ‘neutrality’ of technology. I also looked at how some of the concerns regarding the renaissance and shortcomings of actuarial justice could be addressed in the future Internet. Thus, the question we should focus on is whether we can develop technology that is likely to eradicate, rather than reinforce, bias. This question is critical, as ‘[t]he challenges of the future are rarely solved with the technologies of today’ (Tvede, 2020: Section ‘Our knowledge at double speed’, subsection 4). Addressing censorship, ubiquitous surveillance, and human rights violations, while at the same time facilitating free speech, democratic practices, transparency, accountability, and the rule of law, should not be a task for humans alone. We need to find creative, inventive ways to reconcile our values with technology, and to harness technology to help us build a better world. The crucial thing, however, is to cease being passive observers. Whether early adopters or experts, we need to be prepared, as much as possible, for what is to come. In doing so, we need to bypass boundaries: disciplinary, physical, and national. Installing fundamental values in DFTs and deploying precautions requires action, creative and innovative thinking, and novel frameworks. The claim that big data and emerging technologies might signal the end of theorising crime and offending is fundamentally flawed and can lead to more blackboxing (for more on this, see Chan and Bennett Moses, 2016).
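To make the bias feedback loop concrete, consider the deliberately simplified sketch below. Every figure in it is invented for illustration, and the allocation rule is a hypothetical stand-in for a real forecasting tool; the point is only to show how a system trained on skewed records can entrench the skew it inherits.

```python
# A deliberately simplified sketch of a bias feedback loop in crime
# forecasting. All figures are invented for illustration only.
# Two districts share the SAME underlying crime rate, but district A
# starts with more recorded incidents due to historical over-policing.

TRUE_RATE = 0.05                      # identical true rate in both districts
recorded = {"A": 120, "B": 60}        # skewed historical records (assumed)

for year in range(1, 6):
    total = sum(recorded.values())
    for district in recorded:
        # Patrols are allocated in proportion to past *recorded* crime...
        patrol_share = recorded[district] / total
        # ...and more patrols mean more incidents get observed and logged,
        # even though the underlying rate never differs between districts.
        newly_observed = int(1000 * TRUE_RATE * patrol_share * 2)
        recorded[district] += newly_observed
    print(f"year {year}: {recorded}")

# The absolute gap between A and B widens every year: the forecast
# 'confirms' the very bias it was trained on.
```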

The foresight approach, as well as other methodological tools (such as backcasting, which defines a desirable future and identifies the steps and policies we need to put in place to get there), enables meaningful engagement with technology in and across a range of disciplines. By looking at the current state of technological expansion and envisioning multiple pictures of the future—however implausible and unlikely they currently appear—we could potentially mitigate or eliminate some negative consequences of the Fourth Technological Revolution. This intervention is critical for everyone: developers, early adopters, consumers, businesspeople, politicians, academics, community workers, health professionals, policy makers, residents of smart cities, and border crossers.

The following are areas of urgent concern for criminologists and social science researchers, working in conjunction with our STEM colleagues (in no particular order):

• Addressing techno-fog/blackboxing and ensuring transparency

Digital frontier technologies are yet another example of progress engulfed by a lack of transparency and understanding of how technology works. While we continue to use increasingly complex AI-powered smart devices and autonomous mobile robots for a range of daily chores and purposes, and as we begin to implant such technology into our bodies, we seem to know less and less about how things operate, communicate, make decisions, and create and achieve goals. Just pause for a second and reflect on your knowledge of your surroundings, the technology you use at home, in the workplace, or in a car, or reflect on some of the examples used in this book. How much do you know about how smart home devices, or semi-autonomous cars, work? As Carl Sagan (cited in Goodman, 2016: 466) warns, ‘[w]e might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces’. This ignorance is particularly dangerous given that we live in times where ‘[n]ever have so many people had so much access to so much knowledge and yet have been so resistant to learning anything’ (Nichols, 2017: 2).

Certainly, we appreciate that technology can assist us in a range of daily tasks and can make our everyday life and experiences more pleasant and enjoyable. Our health has also been improved by technology. Nevertheless, we need a greater understanding of both current contexts and future trajectories and possible scenarios. To be clear, I am not referring only to end-users here; experts will soon struggle to comprehend the details of technology too. Advances in AI are of particular concern; as indicated at the beginning of this book, significantly more effort and money have been put into advancing technology than into understanding and addressing its shortcomings and pursuing ‘explainable AI’ that would provide answers as to how systems make decisions and why. This lack of focus is particularly worrying given that smart things and code have been—and will increasingly be—making decisions about our rights and liberties. Decisions on punishment, sentencing, bail, parole, and the chances of recidivism are likely to become even more non-transparent, beyond question, oversight, and scrutiny. This is both dangerous and unacceptable for citizens and non-citizens in the Global North and the Global South.
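To give a flavour of what ‘explainable AI’ might offer, the sketch below probes an opaque risk scorer with a one-feature-at-a-time sensitivity test. The model, features, and weights are hypothetical stand-ins invented for this example, and production explainability methods (such as SHAP or LIME) are far more sophisticated, but the question they answer is the same: which inputs moved the score?

```python
# A minimal sketch of one 'explainable AI' technique: perturb each input
# of an opaque model and watch how the score responds. The model and the
# case below are hypothetical stand-ins, not any real risk-assessment tool.

def risk_model(prior_offences, age, employment_gap_months):
    """Stand-in for a proprietary, opaque recidivism-risk scorer."""
    return 0.4 * prior_offences - 0.02 * age + 0.05 * employment_gap_months

case = {"prior_offences": 3, "age": 24, "employment_gap_months": 10}
baseline = risk_model(**case)

# Sensitivity explanation: nudge one feature at a time, record the change.
for feature in case:
    nudged = dict(case)
    nudged[feature] += 1
    delta = risk_model(**nudged) - baseline
    print(f"{feature:>22}: score change per unit = {delta:+.3f}")

# The printout tells the affected person WHICH inputs drove the decision,
# the kind of answer that should exist before such scores influence
# decisions on bail, parole, or sentencing.
```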

Imposing bans on features of technology in order to prevent misuse and offending (as suggested in addressing the shortcomings of blockchain in Chapter 6), or banning technology altogether, is another cul-de-sac. I am aware that many experts and readers will disagree, for example, when it comes to the use of facial recognition and AI in policing. My main argument here is that bans are unlikely to yield the results we desire. What we need instead is more time, effort, and energy invested in anticipating and assessing the negative impact of existing and future innovations. Undoubtedly, many obstacles are likely to emerge as a technology advances, especially in the early stages of its maturity. Some technologies are going to disappear, while others emerge. As such, the most critical thing in mapping our future engagement with technology in the context of the social sciences is not only unpacking the ins and outs of a particular innovation. It is essential to think about how we relate to the innovation, and what can be done with it. We need to think about smart things as our companion species and act accordingly.

• Strong AI, ethics, and privacy by design

In this book, I deliberately avoided the debate around AGI, while I did engage with the question of the morality of technology. This decision is somewhat controversial, as the two issues are intertwined. Technology has moral relevance, and this issue will become even more pressing as machine learning and DFTs evolve. Researchers should endeavour to engage with both issues, as they are critical for addressing and mitigating the impact of technology on offending, victimisation, and criminal justice interventions. As discussed at length in the next point, we need to make sure technology is not used for human rights violations and encroaching on civil liberties, and that specific ethics codes are embedded into technology. As one of my computer science friends suggested, we might even need Chief Ethical Intelligence Officers who would operate in a range of business and government settings. Yet just which trajectory we ought to follow is less certain. As demonstrated in Chapter 5, there is no agreement among experts, coders, and other stakeholders around which strategy to adopt when it comes to ethics in smart machines: preference, equality, or neutrality. This is another issue that warrants a debate and, if possible, a resolution.

We need to further the conversation by looking into standards that will guide the advances of these and other emerging technologies. Harnessing technology for good, while steering clear of false promises, needs to be at the forefront of research in STEM and the social sciences. In doing so, we need to implement a privacy-by-design approach and consider how we can utilise existing technologies, such as blockchain, that could address privacy intrusions and other shortcomings of DFTs. Privacy is likely to change, but the notion that privacy is a thing of the past needs to be reconsidered, if not contested. Implementing ethical principles into smart things is not the only branch of ethics academia should focus on. It is equally important to make sure that we discuss the moral dimensions of human-to-thing and thing-to-thing associations with the people building and designing technology, as well as with early adopters and other end users. This approach is complementary to our greater understanding of the ins and outs of technological innovations, as we should focus on how the technology works and how to use it for good.
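As a small illustration of what privacy by design can mean at the level of code, the sketch below applies two defaults: collect only the fields a stated purpose requires (data minimisation) and store a keyed pseudonym rather than the raw identifier. The field names and key handling are placeholder assumptions; real privacy engineering would add retention limits, access controls, and key rotation, among other safeguards.

```python
# A minimal privacy-by-design sketch: data minimisation plus keyed
# pseudonymisation. Field names and the key below are placeholders.

import hashlib
import hmac

SECRET_KEY = b"replace-me-and-keep-in-a-vault"   # assumption: managed secret

def pseudonymise(identifier: str) -> str:
    """Keyed hash (HMAC): unlinkable without the key, unlike a plain
    SHA-256, which can be brute-forced for small identifier spaces."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def minimise(record: dict, allowed: set) -> dict:
    """Drop every field the stated purpose does not require."""
    return {field: value for field, value in record.items() if field in allowed}

raw = {
    "name": "J. Doe",
    "passport_no": "X1234567",
    "nationality": "XX",
    "shoe_size": 44,            # collected 'just in case'; never stored
}

stored = minimise(raw, allowed={"passport_no", "nationality"})
stored["passport_no"] = pseudonymise(stored["passport_no"])
print(stored)   # only a 64-character digest and the nationality remain
```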

• Identifying, alleviating, and preventing harm and discrimination

Advances in technology do not necessarily translate into a better, more equal world. While this book has only sporadically addressed matters such as the use of DFTs in war, or the designing of technology to harm people, these are important issues to discuss in the future. Dilemmas raised by the use and justification of designing autonomous mobile robots for the primary purpose of killing human beings should worry us all. However, so-called ‘civilian’ use of technology is often harmful too, as witnessed in the context of border control. The drone stare and mechanical distance used to dehumanise and disrupt unwanted and unauthorised mobility are just one example that illustrates how complex and manifold the harm that accompanies technology can be. Even when technology per se is not harmful, it might be prone to hacks and misuse. What is needed is a clear understanding of mistakes in current and future paths of technological development, and of what—if any—‘collateral damage’ we are willing to tolerate. Outlining the harm and human cost, often hidden behind a benevolent technology narrative, ought to be a priority.

Robust legal frameworks that will identify and anticipate harms created by techno-social fusions, especially AI and machine learning technology, are critical for our future engagement with smart things. The vulnerability of artefacts and code, not just in relation to hacks and intrusions but also in their actions as actants, decision-makers, and goal setters (especially where goals might not align with humans’), must be at the forefront of future research and policymaking. Even if we do not believe that a ‘crime harvest’ is going to happen with the growth of the IoT and autonomous mobile robots, and even if we dismiss the claim that the growth of ambient intelligence and AI-powered devices is a security disaster waiting to happen, the reach of smart things that might soon be implanted into our bodies requires careful consideration. ‘Mission creep’, where technology will serve not only to combat crime but also to target those identified as unwanted or inherently deviant (such as illegalised border crossers and racial minorities), needs to be prevented and/or disrupted. The current and future use of the IoT and mobile robots for border security under the guise of ‘saving lives’, while such practices create seamless borders, is a narrative that needs to be deconstructed.

• The commercialisation of digital frontier technologies

The money question is a critical issue that has not been explored in this volume. The reason for this omission is that I did not think I could do it justice in a short volume such as this one. This absence is a limitation, as many experts, including leading scholars who reviewed this volume, such as Professor Dean Wilson, consider this issue the most important in the technology-crime nexus. Technological artefacts, Professor Wilson suggests, might be an avenue by which capitalism spreads deeper into our everyday lives. Indeed, many technological advances discussed here might no longer serve their intended purpose (for good) and are neither needed nor wanted by consumers; nevertheless, they are imposed on us and marketed as inevitable. This concern is both legitimate and real, and as such, it requires our attention.

Commercialisation currently drives much of the work on big data and AI; the narrative of advertising and selling things intersects with privacy and civil liberties and as such requires unequivocal consideration. We have seen a range of negative consequences of the privatisation of the prison-industrial complex, and it is reasonable to expect similar scenarios here. There is little transparency about how private companies sell their ‘crime prevention’ and security products (software and/or hardware) to governments and consumers. An issue linked to this is who controls and has access to our data. These could be the starting points in our investigation of the commercialisation of DFTs and of how such processes impact on practices and strategies of crime control.

• Bringing the human back to the human-thing alliance

A decision ought to be made on whether we need to harness technology to bring human agency back to the fore. There is a clear need for a public and scientific debate about this issue before we continue to develop technologies that further remove us from the equation. We cannot allow ourselves to end up in a situation where turning off devices and machines amounts to our suicide. The irrelevance of humanity in a world of connected, learning things and machines is a reason for trepidation. Alternatively, maybe we should see ourselves as the poets and artists of the past: labour as we know it today will be delegated to things, while humans intellectualise and contemplate our existence, nature, and the future. Whatever scenario we pursue, well-defined human-thing protocols must be implemented, in which our place in the future Internet is outlined with clarity and consensus. As Papacharissi (2019: Section 1, Introduction, para. 1) would have it, ‘[i]nevitably, the dreams and nightmares rendered by the limits of our human imagination revolve around the same theme: will technology fundamentally alter the essence of what it means to be human? And the answer, despite the countless narratives of anticipation and apprehension is, I find, the same: only if we permit it to do so’. I could not agree more.

Thus, it is time to join forces in scenario writing and plan future research, explore options and trajectories, predict the impact and consequences of approaching innovations, theorise using existing but also innovative frameworks, and lead the way in exploring and designing responses to offending and victimisation in the future Internet.

 