Virtually Vulnerable: Why Digital Technology Challenges the Fundamental Concepts of Vulnerability and Risk

Introduction

The digital environment redefines the concept of vulnerability by challenging conventional theories of place, identity and situation. This is something that Yar (2013) refers to as the problem of ‘who’ and the problem of ‘where’. In the digital world, traditional boundaries to vulnerability are removed: conventional understandings of geography cease to apply and abuse can therefore occur from anywhere in the world. While abuse in the more traditional context might have occurred face-to-face or in a written form, digital environments massively expand the volume of, and audiences for, abuse, and abusers can hide from view through anonymity, pseudonymity or even multiple online personas. By removing geographical boundaries, providing far more complex opportunities for anonymity and presenting the potential abuser with access to potentially millions of victims, the online world challenges criminological norms. In this chapter, we argue that the online environment presents the potential for anyone to be vulnerable and that stakeholders - those with responsibility for safeguarding, from practice to policy - routinely fail victims because their solutions fail to understand the wider social context.

According to Ofcom1 (2018), nearly 90 percent of adults in the United Kingdom (UK) are online, and this increases to 96 percent amongst 45-54 year olds and to 98 percent amongst 16-24 year olds. Given that smartphones are more popular than a computer or tablet for going online (continuing a trend first seen in 2016), Ling (2012) suggests that mobile internet technologies have become taken for granted in everyday life. As such, our everyday lives, our identities, communities and relationships between the self and other are interwoven, increasingly seamlessly, with mobile internet technologies. Mobile technologies in the form of smartphones and tablets, Wi-Fi and 4G networks have transformed how we access online spaces, enabling us to get online anywhere and at any time, and the previously private, ‘safe’ space of the home can be open to the outside world through mobile devices. People can be contacted 24 hours a day and sent abusive messages, hateful content and images, viruses and scams, and online privacy concerns are high on both public and policy agendas (Ofcom, 2019). Rapidly changing technological landscapes have blurred the boundaries between public and private spheres and dramatically altered the contours of risk in relation to self-identity in late modern society (Bond, 2014). Yet as Stokes (2010, p. 321) observes, ‘conceiving of the ‘web’ as a dimension of reality rather than a separate space frees us from the traditional false dichotomies found in social science and the false analogy of ‘virtual’ and ‘real’. As such, ‘given the ubiquity of digital technology and connectivity in contemporary society, an honest, empirically driven, and sociologically grounded discussion of these dynamics is long overdue’ (Barnard, 2017, p. 199).

The concern for safety in ‘virtual’ space is a very real one for those experiencing abuse, victimisation and online crimes. The ‘desire for safety and security has become one of the ways we justify ownership of the [mobile] device’ (Ling, 2012, p. 116), yet arguably, in ameliorating risk and uncertainty, mobile devices also simultaneously facilitate risk and risk anxiety in people’s everyday lives. Thus risk-profiling, a central part of modernity (Giddens, 1991), involves both human and non-human entities (Bond, 2014) and dominates both public discourse and private behaviours. The concept of risk is, according to Beck (1992), directly bound to reflexive modernisation, and we suggest that online risks are a prime example of his risk society thesis. In late modernity, self-identity has become, according to Giddens (1991), a reflexively organised behaviour in which individuals make choices about lifestyle and life plans. Concerns over identity theft, online scams, image-based abuse, fear of being stalked and harassed online or downloading a virus influence how we use the internet, how we behave online, what we post and share and why we invest in software to protect ourselves.

Beck (1992) and Giddens (1990, 1991), in their work on the risk society, provide a useful theoretical framework for examining the contemporary world, its hazards and the social changes which shape and influence political comment, policy direction and the lives and futures of individuals. In late modern society the concept of risk has shifted from natural hazards to the unintended consequences of modernisation itself. Anxieties and uncertainty related to the internet and the associated technological developments in the form of mobile devices appear as consequences of technological advances. The concept of risk has dominated media discourses on mobile technologies and, whilst much of the reporting reflects determinist approaches, it is important to remember that the relationship between risk and vulnerability in virtual environments is actually a highly complex one. Risk is helpfully defined by Beck as:

A systematic way of dealing with hazards and insecurities induced and introduced by modernisation itself. Risks, as opposed to older dangers, are consequences which relate to the threatening force of modernization and to its globalisation of doubt.

(Beck, 1992, p. 21)

As outlined above, the landscapes of risk have also changed as boundaries, for example, between public and private are increasingly blurred, with the home remaining the main context of internet use. The affordances (see Hutchby, 2001) of mobile technologies have blurred the boundaries between time and space, public and private, human and non-human (Bond, 2010, 2014), and the internet, a media-based public sphere (Devereux, 2007), has become a ubiquitous technology - a taken-for-granted source of information, social interaction and community in everyday life.

Risk is, therefore, a social construction and:

an idea in its own right relatively independent of the hazard to which it relates. Risk is thus understood in relation to perception that is generated by social processes - such as representation and definition - as much as it is by actual experience of harm.

(Burgess et al., 2018, p. 2)

Actor network theory (ANT) offers a useful approach to our understanding here. In conceptualising society as produced by and through networks of heterogeneous material, it is made up of a variety of human and non-human entities (Prout, 1996). ‘ANT is committed to demonstrating that the elements bound together in a network (including the people) are constituted and shaped by their involvement with each other’ (Lee and Brown, 1994, p. 774). ANT, as an interdisciplinary approach, can overcome the unhelpful dualisms proposed by modernist social theory to approximate an ‘ecological sociology’ (Murdoch, 2001) in order to understand the ‘intricate and mutually constitutive character of the human and the technological’ (Prout, 1996, p. 198) and, we suggest, the socially constructed nature of risk and vulnerability. Using ANT to conceptualise online risk and vulnerability helps to tease out the complexity of the relationships between the actants (see Latour, 1997) in the networks. ANT attempts to explain and interpret social and technological evolution using neither technical-material nor social reductionism; rather, it incorporates a ‘principle of generalized symmetry’: that the human and the non-human should be integrated into the same conceptual framework.

In overcoming vulnerability, responding to online harms and developing resilience, the adoption of a holistic approach through a multi-stakeholder perspective is essential if we are to avoid the technological determinism of recent policy approaches to online harms. Law (1991) distinguishes between previous approaches to society and technology as technological determinism (the technical acts as explanation) or social reductionism (technology as an expression of social relations) and argues that it is a mistake to ignore the networks of heterogeneous materials that constitute the social. Our everyday lives, our identities, communities and relationships are interwoven with people and technology, with offline and online, and, as such, we need to better understand the dynamic and multifarious nature of everyday life in late modernity.

Such an understanding needs to include a wider rights-based underpinning of the philosophical foundations of legal approaches to the construction of abuse and harm online. We have argued elsewhere (Bond, 2014; Phippen, 2017) for adopting a theoretical pluralism in constructing a conceptual framework, in an attempt to understand the multifarious and complex network of hybridity and the wider political and economic environment.

The model below (Figure 7.1) is adapted from Bronfenbrenner (1979) and is a development of our own stakeholder model for online child safeguarding (Bond and Phippen, 2019). It illustrates the broad range of influences, and the interactions between them, when considering any adult who might become vulnerable to, or a victim of, online abuse.


Figure 7.1 Understanding vulnerability, adapted from Bronfenbrenner’s (1979) Ecology of Human Development.

While the policy focus, as discussed below, tends to centre on technology providers and the role of the mass media, as we can see from the model there are many stakeholders who have closer influence over the individual. Policy drivers also fail to acknowledge the importance of the mesosystems - the interactions between the different stakeholder layers.

Cybersecurity and online safety have become pressing concerns for policy makers, industry (including internet service providers and technology manufacturers), law enforcement and the general public. It is the notion of vulnerability to risk and potential harm that underpins the drive for protection and, importantly, as perceptions of risk change, so do responses to risk management. Furthermore, as awareness increases with the dynamic nature of knowledge, the notion of risk becomes central to society and to individuals as they reflexively construct their own life biographies (Giddens, 1991). Thus, risk management has become part of everyday life, and this includes managing risk online and with and through mobile devices. As risk is a social construction, perceptions of risk are underpinned by cultural, social and historical factors, and just as risk is understood as a social construction, it is important to remember that the factors that lead to online abuse and internet-related crimes are complex and intertwined. For example, children whose parents lack education or internet experience tend to lack digital safety skills, which leaves them more vulnerable to online risk, and teenage girls with a history of sexual abuse and depressive feelings are more at risk of internet-related sexual abuse (Goran Svedin, 2011). Awareness-raising and educational initiatives to increase online safety and protection are based on motivating people to change their behaviours online, as the most widely acknowledged factor associated with online vulnerability is risk-taking behaviour. However, the complexity and diversity of risks online means that such initiatives are not always effective and people can remain vulnerable to certain types of risk. For example, in relation to online abuse, whilst it is well known that those open to online sexual activities, especially flirting and having sexual conversations with strangers, are more likely to become victims of sexual harassment, solicitation or grooming, many victims of online sexual abuse have not engaged in such behaviours previously.

Online risk and vulnerability, therefore, affect everyone. Risk and risk management strategies have profound implications for self-identity, especially for self-identity online in late modernity, through risk-profiling and the adoption of risk-taking behaviours. Online risks, and responses to online risks, connect the individual and society, and the increasingly interventionist role of the state transforms social, legal and cultural constructions of everyday experiences of risk through what Foucault (1977) identifies as ‘disciplinary networks’ for spatial control.

Many of the discourses on risk and online protection have to date been dominated by child protection concerns and by constructions of childhood in late modernity based on notions of innocence, naivety and dependence. Policy initiatives based on protectionist ideals derived from adultist perspectives have defined public space as adult space, where children’s participation is controlled and limited through formal and often legal restrictions. Jenks (2005) develops Foucault’s ideas of spatial control to suggest that the exercise and manipulation of space is a primary example of adults controlling children’s worlds, and he suggests that the postmodern diffusion of authority has not led to a democracy but to an experience of powerlessness, which is not a potential source of identity but a perception of victimisation. However, as Lee (2001, p. 10) argues, such ‘adult’ authority has been called into question:

So far, then, we have seen that adult authority over children, the ability of adults to speak for children and to make decisions on their behalf, has been supported by the image of the standard adult. We have also briefly noted that there are good reasons to be suspicious of the degree of authority that adults have, and that, in the light of these suspicions, adult authority has become controversial. But beneath this controversy, widespread social changes have been taking place that are bringing those forgotten questions of whether adults match up to the image of the standard adult to the fore. In fact, these changes are eroding standard adulthood. Over the past few decades, changes in working lives and in intimate relationships have cast the stability and completeness of adults into doubt and made it difficult and, often, undesirable for adults to maintain such stability.

Thus adults, and their assumed state of completeness, are exposed as a falsehood, and we see adulthood as a dynamic state which changes over time and space to include degrees of resilience and also vulnerability. Vulnerability, associated with both natural disasters and terrorist attacks (Misztal, 2011), applies to people’s everyday lives online. As the Ofcom report (2018, p. 1) observes,

although the internet seems ubiquitous, the online experience is not the same for everyone. Our research reveals significant differences, by age and by socio-economic group, in the numbers who are online at all, and in the extent to which those who are online have the critical skills to understand and safely navigate their online world.

People who are more vulnerable ‘offline’ are also more likely to be vulnerable online, especially those experiencing abusive relationships involving coercion and control. Having low self-confidence/self-esteem and being under the influence of alcohol and/or drugs are also influential factors in both risk-taking behaviours and being at risk online. Furthermore, people with mental health issues and psychological difficulties tend to encounter more risk online and to be more upset by it, but it is important to remember that the absence of vulnerability is not fixed; it is a variable that can change very quickly. An understanding of online vulnerability is particularly important for those working with people who are understood to be more at risk than others (e.g., victims of domestic abuse), as they can specifically benefit from improvements in online self-protection and from developing resilience through recognising and understanding how to respond to risk. Although some studies have identified specific characteristics associated with online vulnerability, the relationship between online and offline vulnerability remains a complex one: although some people who appear more vulnerable offline are also likely to be more vulnerable online, some may appear highly resilient offline but be highly vulnerable online, and vice versa. Thus, stereotyping is unhelpful and can even be dangerous, in some cases presuming that because an individual does not appear to ‘fit’ a risky profile they are safe. Many victims of online abuse, for example, did not appear vulnerable previously: they had strong friendships, good relationships and a high level of educational attainment, but their public or professional identity ultimately made them vulnerable to trolling. Thus, online risks often manifest in unexpected ways.

Known individual vulnerability factors include age, gender, socioeconomic status, level of educational attainment, self-efficacy, sexuality, ethnicity, experience of domestic abuse and/or sexual abuse, disability, emotional and/or behavioural difficulties, poor offline relationships, exclusion from access, and use of drugs and alcohol. While such individual risk factors do not independently lead to vulnerability, an accumulation or combination of them is considered to increase a person’s vulnerability to online risk and harm. Furthermore, multiple long-term risk factors in day-to-day life interplay with trigger events, which can result in the loss of protective factors, online behavioural risks and engagement in risk-taking behaviour. Vulnerable people are not a self-contained or static group, and anyone could be vulnerable to some degree at some time, depending on any one, or a combination, of the risks or challenging life events they face and their resilience.

As the barriers of time and space are blurred, the anonymity of the internet and social media has been attributed to bullying, abuse and trolling behaviours which take advantage of opportunities for deception and disguise. Criminals can masquerade to victims, pretending to have a very different identity, to fool others into parting with money, information and images, and the ease of access which the internet allows means that hundreds or thousands of victims can potentially be reached. Therefore, from a criminological perspective, the affordances of both anonymity and the ability to create and change online identities are powerful mechanisms which facilitate offences being committed and abuse taking place. Whilst increasing numbers of individuals are potentially victims of online crime and abuse, it is the impact on business and the economy, however, which seemingly dominates the debates on cybercrime, as according to the National Cyber Security Centre and NCA (2018, p. 6):

With attackers able to achieve many of their aims by using techniques that are not particularly advanced, the distinction between nation states and cyber criminals has blurred, making attribution all the more difficult.

Furthermore, it is the blurring of geographical boundaries that makes protection through legal control and regulation of the internet and digital content so problematic. ‘Although regulators have for years struggled with rising transnationality, in the forms of global trade and transnational corporations, the internet presents an entirely new dimension to the problems of squeezing transnational activity into the national legal straight jacket’ (Kohl, 2007, p. 4). As such, traditional approaches to law enforcement and criminal justice are challenged as they struggle to remain effective, and even relevant, to online crimes, and they are similarly failing to protect victims, whether individual citizens or large-scale organisations, which further contributes to both risk and risk anxiety in the late modern age.

We can see a clear illustration of this in a statutory instrument emerging in the UK to attempt to control what is referred to by the Government as ‘online harms’. In April 2019 the UK Government released its ‘Online Harms’ White Paper (UK Government, 2019, p. 5) to much press coverage and ministerial comment on how it will ensure that the UK is ‘the safest place in the world to go online’.

It claimed to set out both the problem domain and solutions, which included a regulatory framework, an independent regulator for ‘online safety’, the scope of companies within this framework, how enforcement might work, the role of technology and the empowerment of the end user. However, we feel it is more useful as a tool for exploring how, and why, nation states struggle to tackle online vulnerabilities and ‘protect’ citizens from digital harm; arguably, it will result in an environment that both fails to reach its goal and introduces a cultural normalisation that potentially creates greater risk of harm.

The white paper makes little effort to define what an ‘online harm’ actually is, aside from the following statement in the Ministerial introduction:

Online harms are widespread and can have serious consequences.

We should also point out that there is no clear definition of online safety either - the statutory instrument therefore fails to define either the problem domain or the solution prior to proposing a legislative framework to achieve it. Given that there seem to be two key themes in this paper - the legislative proposal and the ‘duty of care’ for organisations, requiring them to implement technical solutions to these issues - the lack of clarity in the initial definition is concerning. Both law and algorithms (which, as Lessig (1999) argues, define the laws of cyberspace) require clear definition if they are to be successfully implemented, and this instrument has neither.

The rhetoric around safety and unacceptability of harmful content is set out from the outset of the paper without actually defining it:

The government wants the UK to be the safest place in the world to go online, and the best place to start and grow a digital business. Given the prevalence of illegal and harmful content online, and the level of public concern about online harms, not just in the UK but worldwide, we believe that the digital economy urgently needs a new regulatory framework to improve our citizens’ safety online.

Illegal and unacceptable content and activity is widespread online, and UK users are concerned about what they see and experience on the internet. The prevalence of the most serious illegal content and activity, which threatens our national security or the physical safety of children, is unacceptable. Online platforms can be a tool for abuse and bullying, and they can be used to undermine our democratic values and debate. The impact of harmful content and activity can be particularly damaging for children, and there are growing concerns about the potential impact on their mental health and wellbeing.

(p. 5)

And we might observe that this seems to be the starting point for the ambiguity and lack of focus that is to come throughout the document. Digital technology has had immeasurable impacts on our everyday lives, but is it possible to address every possible concern in a single legislative intention? There is also a fleeting reference to the intention to tackle both ‘illegal’ and ‘unacceptable’ harms, and the paper defines a wide range of harms, from those with an apparently clear definition to those that are more ambiguous:

  • Harms with a clear definition, including child sexual exploitation and abuse, terrorist content and activity, organised immigration crime, modern slavery, harassment and cyberstalking, encouraging or assisting suicide, incitement of violence, content illegally uploaded from prisons and the sexting of indecent images by under 18s.
  • Harms with a less clear definition, including cyberbullying and trolling, extremist content and activity, disinformation, advocacy of self-harm and the promotion of Female Genital Mutilation (FGM).
  • Underage exposure to legal content - children accessing pornography, and children accessing inappropriate material (including under 13s using social media and under 18s using dating apps; excessive screen time).

While we do not wish to dwell upon the debate around legal access to social media platforms, in passing we reiterate that the ‘law’ around young people’s access to social media is defined under advertising (Federal Trade Commission, 1998) and data protection (The European Union, 2018) legislation, not harm. The white paper continues to place expectations on technology providers to ensure ‘harms’ do not happen, without actually providing the clarity needed for code to achieve this. More specifically, the state approach to ‘protection’ resides in the control of information flow and restriction of access, rather than taking a broader social perspective of tackling underlying criminal behaviours. We might argue that this perspective, of itself, negatively impacts upon victims, because there is an ideological position that one can control the online world and prevent harm from occurring - one can make oneself ‘safe’ - and therefore, if victims fail to be safe, the failing lies with them.
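To illustrate why this clarity matters for code, consider a deliberately minimal, hypothetical sketch of the kind of rule-based moderation a provider might be expected to deploy; nothing here is drawn from any real platform, and the blocked terms and function are invented for illustration. Because code can only act on the literal rules it is given, the filter simultaneously over-blocks innocuous speech and misses contextual abuse that any human reader would recognise:

```python
# Hypothetical, deliberately naive rule-based moderation filter.
# It acts only on the literal patterns it is given; it cannot infer
# meaning, judge intent or understand context.
BLOCKED_TERMS = {"kill", "attack"}

def literal_filter(message: str) -> bool:
    """Return True if the message should be blocked under the literal rules."""
    words = message.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

# Over-blocking: an innocuous figure of speech trips the rule.
print(literal_filter("That comedian will kill at the show tonight"))      # True

# Under-blocking: a coercive threat passes, as no listed term appears.
print(literal_filter("Send more images or everyone you know sees them"))  # False
```

Without a precise definition of what constitutes a ‘harm’, any such rules are arbitrary, and the white paper’s undefined terms give code nothing firmer to implement.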

While we are confined to exploring these issues in a single chapter, and are therefore unable to explore multiple examples of harm and vulnerability, an effective exploration of the failures of such prohibitive approaches can be shown through examining victimisation and vulnerability in relation to revenge pornography. Within this single example, we can consider how social discourse already fails the vulnerable by categorising these behaviours as something other than what they actually are - forms of domestic abuse perpetrated through digital technologies - and also how legislative ‘solutions’ are bound to fail victims.

The term ‘revenge pornography’ has only emerged in recent times, as can be seen in Figure 7.2 below.

However, we would also take exception to the term revenge pornography in general, given that most acts described as such rarely include a vengeful or pornographic element. If we take definitions of each word in turn:

Revenge:2 the action of hurting or harming someone in return for an injury or wrong suffered at their hands.

By using the term ‘revenge’ to talk about these crimes, we are, in essence, providing some reason or excuse for this behaviour. The perpetrator is taking revenge on the victim for a slight the victim has performed on them. However, given that one of the primary motivations for the non-consensual sharing of indecent images is the breakdown of a relationship, it seems utterly disproportionate to suggest that a fair response to someone ending a relationship is to publicly share indecent images of them without their consent. This is not revenge, it is abuse.

Figure 7.2 Revenge Pornography Google Trend.

Pornography:3 printed or visual material containing the explicit description or display of sexual organs or activity, intended to stimulate sexual excitement.

In the second part of the definition, while we might argue that self-produced sexually explicit material might have been produced as a form of gift (see Mauss, 1966) to represent intimacy and/or stimulate sexual excitement, the intention of the individual sharing the images/materials in a non-consensual way is not sexual excitement; it is to harm, to embarrass, to shame and to hurt the subject of the images. While sexual excitement might arise for those viewing these images, it is not the intention of the poster.

The early recognition of revenge pornography as a term related to websites that provided facilities for ex-partners to post indecent materials of people they had been in relationships with. These ‘revenge’ websites, such as IsAnyoneUp, IsAnyBodyDown or UGotPosted, marketed themselves as pornography sites with ‘user driven content’. However, much of the content posted on these sites consisted of indecent images of an individual, with contributors linking them to the person’s social media profiles and online identity. While these images were frequently either nude images or images of victims engaged in sexual acts, the motivation for the websites was financial gain for the site owners.

Before it was shut down by the FBI (as a result of an investigation finding that many of the images were obtained by hacking victims’ email accounts, rather than being donated by anonymous contributors), it was estimated that IsAnyoneUp generated $13,000 per month in advertising revenue (Gold, 2011). IsAnyBodyDown offered a takedown service (FTC, 2015) whereby, in exchange for payment, the website would offer, but then fail, to take down images of victims shared without consent. Since this high-profile emergence of the concept of mass shaming, the behaviours, as discussed below, have evolved to encompass a wide range of what should more correctly be referred to as image-based abuse or the non-consensual sharing of indecent images. The impact on victims is severe and makes them highly vulnerable to further harm. Moreover, a lack of law enforcement understanding (Bond and Tyrrell, 2018) and public opinion related to these acts will, undoubtedly, compound the harm further.

We should acknowledge the power that the possession or existence of these images has over the victim. If one is in possession of indecent images of another, it is understandable that the subject will not wish for those images to be shared. Therefore, the affordances of the technology (Hutchby, 2001) and the threat of sharing these images allow control over the victim: either to coerce them into behaviours they would otherwise be unwilling to engage in (for example, sending further images), or to blackmail or further harass them. It is perhaps more powerful to hold the images with the threat of sharing than actually to share them, due to the control the abuser can hold over the victim.

However, the impact on the victim can be severe and, by way of illustration, quotations from victims who contacted the Revenge Porn Helpline4 show the extent of harm and vulnerability:

I became very depressed and vulnerable. This affected my social life as I rarely went out just in case someone recognised me. It has caused me to have trust issues, the incident happened quite a while ago but I have not been able to stay in a stable relationship since due to my trust issues, it causes a lot of arguments. I also live in constant fear that there may be another image out there, or that it will happen to me again.

I felt like I had my dignity ripped away from me, this affected my job as I often took time off as a result, I was too scared to leave the house and I did not want people to find out at work. I eventually lost my job due to the time off and paranoia, I felt like every time someone looked at me it was because they had seen the images.

The two quotes above highlight the impact of the non-consensual sharing of indecent images, the humiliation that follows, and the long-term nature of that impact. These quotes reflect a number of issues found in the literature, which categorises harms to both psychological wellbeing and professional identity.

Citron and Franks (2014) develop further categorisations of psychological impact, raising concerns around anxiety that can possibly lead to suicidal tendencies, highlighted by the recent tragic suicide of Veronica Rubio (Patel, 2019). The harms that can arise from the non-consensual sharing of images should not be underestimated and can be as wide ranging as threats to personal safety; body image anxieties; long-term trust issues; and, as illustrated in the quotes above, the ongoing anxiety of not knowing where images have been posted, whether new images will emerge, or who or how many people have seen them. We need to start referring to ‘revenge pornography’ for what it is - domestic abuse, exploitation, coercion and harassment.

When considering whether there might be a stereotypical victim of such behaviours, the literature on victim profiling, as discussed above, can focus on women as the abused and males as offenders. However, an analysis of the cases brought to the Revenge Porn Helpline shows that there is no ‘typical’ revenge pornography scenario or client. While helpline staff deal with many cases that current discourses would consider ‘usual’ in terms of modus operandi (i.e., an ex-partner non-consensually shares indecent images of the victim, either in a public online space or with targeted private individuals), they deal with as many that do not fit into this category (Bond and Tyrrell, 2018).

Analysis of the helpline data highlighted the following key issues:

  • people contacted the service because they felt they had nowhere else to turn
  • they had exhausted options themselves, which may have included pleading with the offender, contacting law enforcement and service and platform providers, and talking to friends and family
  • the key trigger for contacting the service was after the images had been shared
  • whilst they felt that they could have dealt with the issues when an abuser was threatening to share images, once the abuser had moved from private threat to public online space, the victim felt that they were no longer in control and had no idea how to manage the disclosure
  • they turned to the helpline in order to try to get the images removed, because their own efforts were often in vain.

Further research that we have conducted with the helpline, drawing on discussions with staff and the exploration of over 2,000 cases, has highlighted very strongly that many victims were not subject to the sharing of an image but suffered the anxiety of the threat to share. For example, an offender, having received images from the victim, would then use the threat to share them as a means to coerce or exploit the victim further, sometimes over considerable periods of time: unless the victim sent more images or engaged in sexual acts with the offender, the images would be posted online or shared with people known to the victim (mainly employers or family). Therefore, the threat of sharing is more powerful than the act of sharing itself - once the image has been shared, the offender has reduced their power over the victim because the impact of sharing has been achieved. However, this is not always the case; particularly when a campaign of abuse is conducted by the offender, the image will be shared multiple times with multiple targets. The helpline talked about repeat callers who had been subjected to sharing and re-sharing of images - in one case the offender would share the images with every new employer the victim had. In these cases, a repeated revictimisation occurs: every time the images are viewed by a new third party, the harm and shame are repeated as the victim experiences these harms again. As one victim said:

it feels like im being raped online every day.

Moreover, there is another seriously distressing impact around not knowing who has seen the images and where they have ended up. And while the images may have been shared once, unless the offender is tackled for their behaviours, there is no guarantee that images will not be shared again. This uncertainty is an aspect of harm that is missing from much of the discussion and literature around revenge pornography. With the focus on the non-consensual act of sharing, there is an assumption that these are one-off offences. In reality, victims can be subject to abuse many times, and from many different perspectives. To repeat part of the quotation from the victim of revenge pornography from earlier in the chapter:

I felt like every time someone looked at me it was because they had seen the images.

The online dissemination of images, and the lack of control once an image is either shared with multiple parties or posted in a public online space, mean that the audience for viewing is entirely unknown. With digital images, sharing once does not prevent further sharing, unless the offender is challenged on their behaviour and taken to task for what they have done. There is a crucial need for legislation to protect victims because, in a lot of instances, offenders feel they can do what they like with no challenge to their behaviour. Once images are in their possession, there is a persistent threat to the victim. Once images have been shared, there is no guarantee that further sharing can be controlled, or any certainty about who ends up seeing them.

Legislation to tackle revenge pornography was introduced in 2015 as part of the Criminal Justice and Courts Act 2015, s33 (UK Government, 2015), which states:

(1) It is an offence for a person to disclose a private sexual photograph or film if the disclosure is made—

(a) without the consent of an individual who appears in the photograph or film, and

(b) with the intention of causing that individual distress.

The legislation failed to address issues of threat, victim impact or even, as a result of the offence being classified as a communication rather than a sexual crime, anonymity for the victim. Therefore, while there was now potential for an abuser to be charged for an abusive act, there was also the risk of revictimisation through a lack of anonymity. The absence of threat in the legislation meant that an abuser had actually to send or post an image before they could be charged. Furthermore, the requirement of intent to harm focusses upon the motivation of the abuser, rather than the impact upon the victim: the abuser need only state that the intention was not to cause distress to have a solid defence against this legislation. And while we would question why anyone would attempt to justify the non-consensual sharing of indecent photographs as achieving anything other than distress, the defence exists in the legislation. Moreover, the state’s view of the crime as communication based, rather than sexual, also highlights the ideological position that technology is to blame, rather than social practices, compounding the expectation that, in some way, without technology, the vulnerability would not exist. Guidance from within the Criminal Justice System (Crown Prosecution Service, 2017) around coercion and control makes little reference to digital or online elements that might exist, again isolating the act of image sharing from the wider context of domestic abuse.

Recent news reporting from a BBC investigation (BBC News, 2019) further demonstrated that victims, the vulnerable, had little confidence in this legislative measure, not just because the legislation was not fit for purpose, but because the broader stakeholder community (for example, police, social care, education professionals and employers) did not respond effectively to the abuse. The reporting highlighted concerns raised by our own work with the Revenge Porn Helpline and working directly with victims (Bond, 2015): many times victims will go to employers, friends and police (Bond and Tyrrell, 2018) and be met with judgement rather than sympathy. Helpline practitioners have recounted stories of how victims have been told ‘if you hadn’t taken the images, he wouldn’t have been able to share them’. The recent coverage has highlighted a very important issue when considering how states might address online vulnerability for their citizens: legislation is not enough if stakeholders fail to engage with the wider social conditions.

According to Ofcom (2018), the majority of adults in the UK continue to say that, for them, the benefits of the internet outweigh the risks. However, the report also notes that constant connectivity can be overwhelming, and nearly half of internet users over the age of 16 reported seeing hateful content online in the past year. In the outer layer of the model, the macrosystem, it is wider attitudes and ideologies that can influence online vulnerability, such as the understanding of human rights to privacy and to respect for private life, family life, home and correspondence. Social and cultural norms also change, but they influence what is considered acceptable and appropriate behaviour and content online, which in turn impacts on constructions of risk. In the exosystem, the mass media, legal frameworks, government policy and national agencies are influential, as they provide laws and law enforcement for online crime. Understanding what is legal and illegal online can also help individuals manage and respond to risks. The microsystems cut across many different types of relationships, from interactions with the constabulary, for example, and national help and advice lines, to friendships and memberships of cultural groups, to closer relationships with friends and family. The individual’s vulnerability online is influenced by all these relationships, by the interactions between the layers - the mesosystems - and by the risk and resilience factors unique to that person.

Yet legislative solutions will in general focus on the exosystem and fail to understand that the closer to the individual the stakeholder is, the more effective they can be in addressing vulnerability and harm.

Returning to the Online Harms White Paper, we can see an ideology of technological intervention:

There is currently a range of regulatory and voluntary initiatives aimed at addressing these problems, but these have not gone far or fast enough, or been consistent enough between different companies, to keep UK users safe online...

...The UK will be the first to do this, leading international efforts by setting a coherent, proportionate and effective approach that reflects our commitment to a free, open and secure internet.

...We want technology itself to be part of the solution, and we propose measures to boost the tech-safety sector in the UK, as well as measures to help users manage their safety online.

...Tackling harmful content and activity online is one part of the UK’s wider ambition to develop rules and norms for the internet, including protecting personal data, supporting competition in digital markets and promoting responsible digital design.

The position follows a policy direction that can be seen taking shape from 2012 (Phippen, 2017). That direction focusses upon the use of technology to ‘solve’ issues related to online protection and safeguarding, the view being that, given the online environment provides the access to these harms, the technology must also be able to provide the solution to prevent these things from happening. As we have illustrated in Figure 7.1, however, technology providers sit far away from the individual, in the exosystem, and will therefore only be of limited value in addressing the vulnerabilities of the individual.

These pro-active technological interventions, which began with filtering approaches that would prevent access to harmful content, have already been a source of concern to the United Nations, with the ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (UN Human Rights Council, 2018) stating that:

States and intergovernmental organizations should refrain from establishing laws or arrangements that would require the “proactive” monitoring or filtering of content, which is both inconsistent with the right to privacy and likely to amount to pre-publication censorship.

Nevertheless, the UK Government’s 2019 position on protection of the victim is an increased focus on pro-active, interception-based content moderation by those businesses within the scope of the proposed regulation. The approach fails to acknowledge the lack of robust empirical evidence relating to ‘online harms’ and therefore does not adequately explain how regulatory action would be prioritised to address the harms that have the greatest impact - it would seem, from the paper, that all ‘online harms’ are equal and should be treated as such. The failure to critically consider the conceptual underpinnings of the inclusion of harm versus crime results in a lack of clarity in definitions of harm. For example, if we apply this ideological position to those made vulnerable through the myriad forms of image-based abuse, we might question the role of the service/technology provider in reducing harm. While these platforms may provide locations for an image to be posted, or the technology to share it, they have no control over the behaviour of the abuser: the threats, the control, the reposting on a different platform, or the further harassment of the victim.

Each of the online harms defined in the paper is a complex issue requiring a holistic approach and a deep understanding of the many potential social variables and interfaces that result in an ‘online harm’ happening. Yet throughout the paper this is ignored and ‘sticking plaster’ solutions are put forward which fail to address the fundamental issue that different harms require specific and appropriate responses - responding to ‘harmful’ content is not the same as tackling gang culture or predatory grooming online.

Throughout the White Paper, the discourse on the intention to adopt a ‘risk-based approach’ is flawed and based on rhetoric rather than reality. The language and tone treat all individuals as equal and as a single, passive entity. For example, it states (p. 85): ‘Users want to be empowered to manage their online safety, and that of their children, but there is insufficient support in place and they currently feel vulnerable online’, which completely fails to acknowledge that children are users in their own right and depicts children as ‘passive’ consumers of internet content rather than ‘active’ users and contributors to the digital economy.

Perhaps the biggest failing of the white paper is that it has moved beyond internet safety to try to encompass anything that might constitute an online harm, and then to present a single solution to address all of them. Within the introduction to the white paper (p. 6), the Ministers with responsibility state that the paper relates to the:

UK’s wider ambition to develop rules and norms for the internet.

Is it really the UK government’s place to develop rules and norms for the internet, a global technological infrastructure, as a whole? We question the motivations of nation states in trying to impose their rules upon an environment that transcends geographical and legislative boundaries, particularly when the perspective seems to view online harms as entirely technologically facilitated, and therefore digitally solvable. These policy directions demonstrate a failure to understand both the nature of technologically facilitated abuse and the broader social context in which it takes place. We are reminded of Marcus Ranum’s much-quoted law (Cheswick et al., 2003, p. 202):

You can’t solve social problems with software.

We would argue that failure to appreciate this results in the vulnerable being failed by policy and legislative solutions. If we consider the nature of image-based abuse, technology, and the control of technology, will never be able to prevent harm. We absolutely need technology and platform providers to play their part - implementing reporting and takedown mechanisms that are effective, providing the means to block abusers on their networks, and working with law enforcement to provide evidence. However, we also need legislation that is fit for purpose and understands the social dimensions in which the harm takes place, and we need those with safeguarding or law enforcement responsibilities to appreciate the impact on victims and not cast judgement on their behaviour.
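To indicate what a platform’s part might realistically look like, the sketch below is a minimal, hypothetical illustration of the hash-matching approach that commonly underpins reporting and takedown mechanisms; it does not reflect any named provider’s actual system, and the function names are our own. It also makes the limits plain: matching can only catch copies of known images, and it cannot reach the abuser’s threats or coercion:

```python
import hashlib

# Hypothetical sketch of hash-based takedown support. Real deployments
# typically use perceptual hashes, which survive resizing and re-encoding;
# a cryptographic hash is used here only to keep the illustration simple.
reported_hashes = set()

def register_reported_image(image_bytes: bytes) -> None:
    """Record the digest of an image a victim has reported."""
    reported_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def blocks_reupload(upload_bytes: bytes) -> bool:
    """Check an incoming upload against previously reported images."""
    return hashlib.sha256(upload_bytes).hexdigest() in reported_hashes

reported = b"...bytes of a reported image..."
register_reported_image(reported)

print(blocks_reupload(reported))            # True: the identical file is caught
print(blocks_reupload(reported + b"\x00"))  # False: any alteration defeats an exact match
```

Such mechanisms can suppress the re-sharing of known images on a single platform, but they cannot stop an abuser threatening the victim, moving to another platform or holding unshared images over them, which is precisely why legislation and the wider stakeholder responses discussed above remain essential.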

Conclusions

Digital technology clearly redefines the nature of vulnerability and arguably results in an environment where anyone can be vulnerable. However, digital technology is not the cause of this redefinition of the vulnerable; it is merely the passive conduit through which abuse occurs. Due to the opportunities afforded by technology, the reach of the abuser is extended, and they have many tools to make use of, such as perceived anonymity or dispersed presence. However, abuse, and the impact upon victims, are human behaviours and responses. While countermeasures can be introduced through code, code cannot make judgements, draw inferences or understand context. It has to follow rules, and those rules have to be logical and literal. On the 25th anniversary of the World Wide Web in 2014, Tim Berners-Lee, its inventor, was interviewed about what he thought were the important policy issues facing digital technology in the future (Kiss, 2014). In a wide-ranging interview, among other things, he stated, ‘we need our lawyers and our politicians to understand programming, to understand what can be done with a computer’.

We suggest that five years after this statement was made there is still no evidence that this is the case. It is far easier for a policy maker to point the finger at someone else than to appreciate the role they have in challenging abusive and complex social behaviour. Until we move beyond technological intervention as the solution to protecting victims of online abuse and appreciate the relationships between stakeholders and the roles they play in this protection, as well as challenging abusive behaviours, we would suggest that legislation and the wider criminal justice system will continue to fail the vulnerable, whomever and wherever they may be online.

Notes

1 The UK’s communications regulator - www.ofcom.org.uk/.
2 https://en.oxforddictionaries.com/definition/revenge.
3 https://en.oxforddictionaries.com/definition/pornography.
4 https://revengepornhelpline.org.uk/.

References

Barnard, S. (2017). ‘Digital Sociology’s Vocational Promise’. In Daniels, J., Gregory, K. and McMillan Cottom, T. (Eds.) Digital Sociologies. Bristol: Policy Press, pp. 195-210.

BBC News (2019, May 19). ‘Revenge Porn Laws “Not Working”, Says Victims Group’. www.bbc.co.uk/news/uk-48309752

Beck, U. (1992). Risk Society: Towards a New Modernity. London: Sage.

Bond, E. (2010). ‘Managing Mobile Relationships - Children’s Perceptions of the Impact of the Mobile Phone on Relationships in their Everyday Lives’. Childhood, Vol. 17 (4), pp. 514-529.

Bond, E. (2014). Childhood, Mobile Technologies and Everyday Experiences. Basingstoke: Palgrave.

Bond, E. (2015). Understanding Domestic Abuse in Suffolk: Understanding Survivors’ Experiences. Ipswich: UCS with Suffolk OPCC.

Bond, E. & Phippen, A. (2019). ‘Why Is Placing the Child at the Centre of Online Safeguarding So Difficult?’ Entertainment Law Review, Vol. 30 (3), pp. 80-84.

Bond, E. & Tyrrell, K. (2018). ‘Understanding Revenge Pornography: A National Survey of Police Officers and Staff in England and Wales’. Journal of Interpersonal Violence. DOI: 10.1177/0886260518760011

Bronfenbrenner, U. (1979). The Ecology of Human Development: Experiments by Nature and Design. Cambridge, MA: Harvard University Press. ISBN 0-674-22457-4.

Burgess, A., Wardman, J. & Mythen, G. (2018). ‘Considering Risk: Placing the Work of Ulrich Beck in Context’. Journal of Risk Research, Vol. 21 (1), pp. 1-5. DOI: 10.1080/13669877.2017.1383075

Cheswick, W. R., Bellovin, S. M. and Rubin, A. D. (2003). Firewalls and Internet Security: Repelling the Wily Hacker. Addison-Wesley Professional. ISBN 978-0-201-63466-2.

Citron, D. & Franks, M. (2014). ‘Criminalizing Revenge Porn’. Wake Forest Law Review, Vol. 49, pp. 345-391.

Crown Prosecution Service (2017). ‘Controlling or Coercive Behaviour in an Intimate or Family Relationship’. www.cps.gov.uk/legal-guidance/controlling-or-coercive-behaviour-intimate-or-family-relationship

Devereux, E. (2007). Understanding the Media (2nd Edn.). London: Sage.

The European Union (2018). ‘General Data Protection Regulation (GDPR)’. https://gdpr-info.eu/

Federal Trade Commission (1998). ‘Children’s Online Privacy Protection Rule (“COPPA”)’. www.ftc.gov/enforcement/rules/rulemaking-regulatory-reform-proceedings/childrens-online-privacy-protection-rule

Federal Trade Commission (2015). ‘Website Operator Banned from the “Revenge Porn” Business after FTC Charges He Unfairly Posted Nude Photos’. www.ftc.gov/news-events/press-releases/2015/01/website-operator-banned-revenge-porn-business-after-ftc-charges

Foucault, M. (1977). Discipline and Punish: The Birth of the Prison. London: Penguin.

Giddens, A. (1990). The Consequences of Modernity. Cambridge: Polity Press.

Giddens, A. (1991). Modernity and Self-Identity: Self and Society in the Late Modern Age. Cambridge: Polity Press.

Gold, D. (2011, November 10). ‘The Man Who Makes Money Publishing Your Nude Pics’. www.theawl.com/2011/11/the-man-who-makes-money-publishing-your-nude-pics/

Goran Svedin, C. (2011). ‘Research Evidence into Behavioural Patterns Which Lead to Becoming a Victim of Sexual Abuse’. In Ainsaar, M. and Loof, L. (Eds.) Online Behaviour Related to Child Sexual Abuse. Literature Report: ROBERT, pp. 37-49. www.childcentre.info/robert/public/Online_behaviour_related_to_sexual_abuse.pdf

Hutchby, I. (2001). ‘Technologies, Texts and Affordances’. Sociology, Vol. 35 (2), pp. 441-456.

Kiss, J. (2014). ‘An Online Magna Carta: Berners-Lee Calls for Bill of Rights for Web’, www.theguardian.com/technology/2014/mar/12/online-magna-carta-berners-lee-web

Kohl, U. (2007). Jurisdiction and the Internet. Cambridge: Cambridge University Press.

Latour, B. (1997). ‘The Trouble with the Actor-Network Theory’. Danish Philosophy Journal, Vol. 25, pp. 47-64.

Law, J. (1991). ‘Introduction: Monsters, Machines and Sociotechnical Relations’. In Law, J. (Ed.) A Sociology of Monsters: Essays on Power, Technology and Domination. London: Routledge.

Lee, N. (2001). Childhood and Society: Growing Up in an Age of Uncertainty. Buckingham: OUP.

Lee, N. & Brown, S. (1994). ‘Otherness and the Actor Network: The Undiscovered Continent (Humans and Others: The Concept of “Agency” and Its Attribution)’. American Behavioral Scientist, Vol. 37 (6), pp. 772-790.

Lessig, L. (1999). Code and Other Laws of Cyberspace. New York, USA: Basic Books, Inc.

Ling, R. (2012). Taken for Grantedness: The Embedding of Mobile Communication in Society. London: MIT Press.

Mauss, M. (1966). The Gift: Forms and Functions of Exchange in Archaic Societies. London: Cohen & West.

Misztal, B. A. (2011). The Challenges of Vulnerability: In Search of Strategies for a Less Vulnerable Social Life. London: Palgrave Macmillan.

National Cyber Security Centre and NCA (2018). ‘The Cyber Threat to UK Business’. https://nationalcrimeagency.gov.uk/who-we-are/publications/178-the-cyber-threat-to-uk-business-2017-18/file

Ofcom (2018). ‘Adults’ Media Use and Attitudes Report’. www.ofcom.org.uk/__data/assets/pdf_file/0011/113222/Adults-Media-Use-and-Attitudes-Report-2018.pdf

Patel, B. (2019 May 29). ‘Mother of Four-Year-Old and Nine-Month-Old Baby Hanged Herself After Jealous Lover Sent their Sex Tape to Her Colleagues’. www.dailymail.co.uk/news/article-7083597/Mother-hanged-jealous-lover-sent-sex-tape-colleagues.html

Phippen, A. (2017). Children’s Online Behaviour and Safety: Policy and Rights Challenges. Basingstoke: Palgrave.

Prout, A. (1996). ‘Actor-Network Theory, Technology and Medical Sociology: An Illustrative Analysis of the Metered Dose Inhaler’. Sociology of Health and Illness, Vol. 18 (2), pp. 198-209.

Stokes, P. (2010). ‘Young People as Digital Natives: Protection, Perpetration and Regulation’. Children’s Geographies, Vol. 8 (3), pp. 319-323.

UK Government (2015). ‘Criminal Justice and Courts Act 2015’. www.legislation.gov.uk/ukpga/2015/2/section/33/enacted

UK Government (2017). ‘The Digital Economy Act 2017’. www.legislation.gov.uk/ukpga/2017/30/contents/enacted

UK Government (2019). ‘Online Harms White Paper’. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf

UN Human Rights Council (2018). ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression’. http://daccess-ods.un.org/access.nsf/Get?Open&DS=A/HRC/38/35&Lang=E

 