Moving robots: Killing machines on the loose?

In 2018, a self-driving Uber car operating in autonomous mode struck and killed Elaine Herzberg, a 49-year-old woman, in Tempe, Arizona. The panic in the media was palpable, although commentators were quick to point out that human drivers kill many people in traffic accidents every single day (Piper, 2020). Elaine Herzberg’s death was compared to the death of Bridget Driscoll, the first pedestrian ever to be killed by a car in the United Kingdom, in 1896 (Kunkle, 2018). For a while, Uber suspended testing of self-driving vehicles in Arizona. The questions everyone seemed to be asking were: How could this happen? Is the technology ready? Drones, on the other hand, already have a reputation as ‘killer robots’, given their military application. As Hayes et al. (2014: 8) point out, ‘[f]ew technologies have captured the media’s attention like drones’. Perceived as the ultimate smart weapons, drones in the military context provide an advantage in information warfare; however, they are increasingly seen as ‘automated surveillance-military killing machines’ (Wilson, 2012: 274). We are now used to media reports about drone attacks like the one on General Qasem Soleimani, a top Iranian military commander killed in Iraq in January 2020. Drones have caused thousands of military and civilian casualties in Pakistan, Syria, Afghanistan, Yemen, and Somalia (Wall and Monahan, 2011; Hayes et al., 2014).

The use of robots—in particular, drones in war—has been extensively scrutinised in media and academia (Wall and Monahan, 2011; Coeckelbergh, 2013; Chamayou, 2015; Wall, 2016). The predominant narrative around both military and civil drones is one of a hunt, with a mixture of sci-fi and Greek and Roman mythological names used to baptise the latest machines (Milivojevic, 2016). Superior, technology-powered enforcers from the sky thus ‘hunt and kill’ adversary fighters and prospective terrorists, or identify and immobilise illegalised border crossers. From a safe space, targets in hostile areas are monitored and eliminated, by chance or design (see Chamayou, 2015). The outcome is a ‘drone stare’—surveillance ‘that abstract[s] people from contexts, thereby reducing variation, difference, and noise that may impede action or introduce moral ambiguity’ (Wall and Monahan, 2011: 239). Via drone use, people are reduced to targets, dead bodies to numbers, and civilian victims to collateral damage and ‘bug splats’ (Danchev, 2016).

Civil mobile robots tend to avoid such scrutiny. Still, as Leetaru (2019) notes, ‘[a]s we look into the future of driverless cars and autonomous delivery drones, one of the most existential questions of their future is how to prevent them from being used for harm’. In the future, AVs and drones will mostly be connected to one another and to infrastructure, via the smart devices and sensors they carry. They will be prone to hacking and misuse, and, empowered by surveillance, might be used to break the law, facilitating crimes such as stalking, family violence, child abuse, assault, drug trafficking, or terrorism (Boucher, 2016; Goodman, 2016). How law enforcement is to respond to these anxieties is discussed below.
