Individual decision-making within an organisation

A significant factor behind poor cybersecurity within organisations is that individual employees behave in insecure ways: users’ lack of “secure” behaviour may leave the company vulnerable to cyber attacks. In an analysis of security breaches reported across different sectors, 64% of incidents were judged to be “likely” due to human error (Evans et al., 2018), and humans remain a major target for cyber attacks (Marinos and Lourenço, 2019).

Individuals are key stakeholders in the cybersecurity arena, not only as victims but also as defenders against attacks. However, not all individuals recognise or acknowledge their responsibility to protect the network. It thus becomes increasingly important to address the human component of cybersecurity within organisations and to motivate the workforce to be better defenders. This is sometimes taken to mean simply improving everyday computing practices, but the situation is more complex: we need to understand more about how managers and employees behave within an organisation when it comes to cybersecurity, and the ways in which we might encourage, or nudge, better behaviour.

Unfortunately, there is a lack of reliable behavioural data on users’ cybersecurity decisions and actions (van Bavel et al., 2019). What users say they understand and do is not necessarily what they actually understand and do. This is a problem for a literature that is heavily reliant upon survey methodology: users may report awareness in surveys but may lack the skills or inclination to carry out the associated actions. This poses a real challenge to researchers addressing the human component of cybersecurity decision-making in general, and decision-making around cyber insurance in particular.

There is a stronger literature around attitudes towards cybersecurity (Bulgurcu et al., 2010). When questioned, individual employees often show resistance to complying with their organisations’ information security policies and a negative attitude towards cyber insurance uptake. The factors that underpin such attitudes can be grouped into three categories: (i) failure to admit the risk or take responsibility for it, due to misplaced confidence; (ii) overstretched staff resources, leading employees to feel that cybersecurity procedures are overly burdensome; and (iii) the influence of the immediate social environment on cybersecurity decision-making. Each of these factors is elaborated upon below.

Failure to admit the risk

Employees can show a range of inaccurate perceptions of risk. These include cognitive biases known to affect decision-making, such as optimism bias (e.g. “It won’t happen to me”) and confirmation bias (e.g. “We haven’t been attacked yet, so we are well protected”). This sense of not feeling personally at risk is sometimes accompanied by the over-optimistic belief that “only amateurs fall victim to attacks” (Sasse and Flechais, 2005; Pfleeger and Caputo, 2012). People can easily become complacent about risk. For example, Miyazaki and Fernandez (2001) found that those who spent more time on the internet tended to have lower levels of concern over privacy and fewer worries about online purchases. In the absence of adverse events, and therefore of learned experience, consumers perceive online risks to be overstated and/or sensationalised by the media.

Users’ beliefs about their susceptibility to an attack directly impact their motivation to behave securely. To illustrate this, Davinson and Sillence (2010) found that training interventions around phishing failed to improve secure behaviour unless people changed their views about their own vulnerability.

We find evidence that employees can be both overconfident and underconfident in their own ability to protect against cyber attacks. On the one hand, Furnell (2007) found that people tend to overestimate their ability to behave securely online: respondents claimed to have a good awareness of cybersecurity threats and safeguards, yet there were a number of areas in which they left themselves vulnerable. This overconfidence reflects a control bias, in the sense that people feel they can exercise more personal control over their environment than the data justify (e.g. “I could easily detect a threat” or “I wouldn’t fall for a scam”) (Pfleeger and Caputo, 2012). This is a concern, as the more confident people are, the riskier their cyber behaviour becomes (Campbell et al., 2011; Weinstein, 1980). The point is reinforced by Aytes and Connolly (2004), who found that knowledge of security threats was a poor predictor of students’ attempts to mitigate those threats: even those who were aware of the dangers still engaged in unsafe computing practices. They concluded that providing information through awareness training was therefore not sufficient in itself to change behaviour.

On the other hand, underconfidence links to a well-established literature on lack of computer self-efficacy. In such instances, people express doubts about their ability to comply with information security policies, or worry about their lack of access (actual or perceived) to the necessary organisational resources, such as training and policies. In other cases, people may simply lack the skills to protect themselves. Furman et al. (2011) found that users were aware of and concerned about online and computer security but, when measured through a comprehension test, lacked a complete skill set to protect their computer systems, identities, and information online.
