# Behavioural economics models

In recent decades, new behavioural economics models have been proposed, facilitating a deeper exploration of cybersecurity behaviour and suggesting alternative interventions to improve it. These formal models of human behaviour provide an important theoretical framework. Recognising that decision-making is not entirely rational, behavioural economics models go beyond the classical approach of perfect rationality and maximisation of expected utility. Section 3.5.1 presents key behavioural economics models that can be used to describe cybersecurity behaviour.

In addition, behavioural economics provides a methodology based on the application of experimental methods to test these theories and generate empirical evidence as to how cybersecurity decisions are actually made. Chief among these are Behavioural Economics Experiments (BEEs), which are introduced in Section 3.5.2, alongside an explanation of how to conduct such experiments.

## Models

In this section we present two key behavioural economics models that have been used to describe cybersecurity behaviour. First, we present Dual-Thinking Theory, a model of human decision-making set out by Nobel laureate Daniel Kahneman in his seminal book, *Thinking, Fast and Slow* (Kahneman, 2011). We then present Prospect Theory, a model of decision-making under risk that forms the cornerstone of behavioural economics. Both models are instrumental in understanding BEEs.

### Dual-Thinking Theory

Kahneman (2011) proposes a dual model for human decision-making that has important implications for decisions involving cybersecurity. According to the model, all decisions (including, of course, those related to cybersecurity) are made employing two fundamentally different modes of thought, called System 1 and System 2. Roughly speaking, System 1 thinking is fast, intuitive, associative, metaphorical, automatic, and impressionistic, and cannot be switched off. Its operations involve no sense of intentional control. System 2 thinking is slow, conscious, deliberate, and effortful. System 2 thinking bears a close resemblance to the rational agent (termed an *econ*) in standard economic theory, which treats decision makers as *econs* able to optimise their decision-making.

### Prospect Theory

Prospect Theory (Kahneman and Tversky, 1979; Tversky and Kahneman, 1992) provides an economic model of behaviour under risk that proves especially useful for analysing cybersecurity behaviour (van Bavel et al., 2019). Prospect Theory departs from conventional economic models such as Expected Utility Theory. In Expected Utility Theory, a utility function $u$ transforms objective outputs (for instance, monetary values) into their corresponding subjective values for the decision maker. Decision-making is then determined by the optimisation of expected utility, while the corresponding probabilities remain unchanged. As a simple example of how Expected Utility Theory works, consider an agent who is given the option to pay an amount $I$ to participate in a game. In this game, she can obtain a net outcome $x_i$ with a known probability $p_i$, for $i = 1, \ldots, n$, where $x_1 > x_2 > \cdots > x_n$. The net outcome is equal to the total outcome of the game minus the participation cost $I$.

She will participate in the game if and only if $u(I)$ is lower than the expected utility of the outcomes, given by $\sum_{i=1}^{n} p_i\, u(x_i)$.
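The Expected Utility participation rule can be sketched in a few lines of code. The square-root-style utility function and the numeric example below are illustrative assumptions, not part of the original model.

```python
import math

def expected_utility(outcomes, probs, u):
    """Expected utility: sum of p_i * u(x_i) over the game's net outcomes."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

def participates_eut(cost, outcomes, probs, u):
    """EUT rule: join the game iff u(I) < sum of p_i * u(x_i)."""
    return u(cost) < expected_utility(outcomes, probs, u)

# Illustrative risk-averse utility (an assumption for this example):
# concave for gains, mirrored for losses.
def u(x):
    return math.sqrt(x) if x >= 0 else -math.sqrt(-x)

# Pay I = 10 for a game with net outcomes 90 (p = 0.2) and -10 (p = 0.8).
print(participates_eut(10, [90, -10], [0.2, 0.8], u))  # False for this agent
```

Under these assumptions the expected utility of the outcomes is negative, so the agent declines; a risk-neutral agent (linear utility) would face expected net winnings of 10 and might decide differently.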

By contrast, Prospect Theory considers that probabilities also need to be transformed before their consideration in the optimisation process. This transformation is done using the *weighting function*, denoted by $w$. The underlying idea is that, in the same way that an increase of €1,000 in the output does not increase the utility by the same amount if the initial output is €0 or €10 million, an increase of 0.10 in the probability has a different impact on the decision weight if it applies to a probability of 0.01 or 0.30. To capture this critical behavioural effect, $w$ is defined in terms of probability *ranks*. A rank, or more intuitively a *good-news probability*, for any potential outcome $x$ is defined as the probability of obtaining an outcome strictly larger than $x$. Formally, $rank(x) = \sum_{x_i > x} prob(x_i)$, and ranks are numbers between 0 and 1, where 0 is the rank associated with the best possible outcome and 1 is the rank associated with the worst.
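With outcomes listed from best to worst, as in the example above, each rank is simply the cumulative probability of the strictly better outcomes. A minimal sketch:

```python
def ranks(probs):
    """Good-news probabilities: rank(x_i) = P(outcome strictly larger than x_i).

    Assumes probs are listed for outcomes in decreasing order,
    x_1 > x_2 > ... > x_n, so that rank(x_i) = p_1 + ... + p_{i-1}.
    """
    result, cumulative = [], 0.0
    for p in probs:
        result.append(cumulative)  # probability mass strictly above x_i
        cumulative += p
    return result

# Three outcomes, best to worst: the best outcome always has rank 0.
print(ranks([0.25, 0.25, 0.5]))  # [0.0, 0.25, 0.5]
```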

Let us define $x_{n+1} = -\infty$. Then, the probability of outcome $x_i$ can be written as $p_i = rank(x_{i+1}) - rank(x_i)$ for $i = 1, \ldots, n$. Given a weighting function $w$, the *decision weight* of outcome $x_i$ is defined as $\pi_i = w(rank(x_{i+1})) - w(rank(x_i))$. Notice that if the weighting function is the identity function, i.e. $w(p) = p$, then the decision weights coincide with the probabilities of the outcomes, $\pi_i = p_i$. Decision weights are positive numbers lower than one, but they are not required to add up to one. Decision weights are related to the slope of the weighting function: the steeper the weighting function is, the larger the difference between $w(rank(x_{i+1}))$ and $w(rank(x_i))$, and hence the larger the corresponding decision weight $\pi_i$. Under Prospect Theory, an agent with utility function $u(x)$ and weighting function $w(p)$ will participate in the game if and only if $u(I) < \sum_{i=1}^{n} \pi_i\, u(x_i)$.
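These definitions translate directly into code. The inverse-S weighting function below is the one-parameter form estimated by Tversky and Kahneman (1992); the parameter value gamma = 0.61 is their gains-domain estimate and is used here purely for illustration.

```python
def decision_weights(probs, w):
    """pi_i = w(rank(x_{i+1})) - w(rank(x_i)), with rank(x_{n+1}) = 1.

    probs are listed for outcomes in decreasing order, x_1 > ... > x_n.
    """
    weights, rank = [], 0.0
    for p in probs:
        weights.append(w(rank + p) - w(rank))
        rank += p
    return weights

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) inverse-S weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def participates_pt(cost, outcomes, probs, u, w):
    """Prospect Theory rule: join the game iff u(I) < sum of pi_i * u(x_i)."""
    pis = decision_weights(probs, w)
    return u(cost) < sum(pi * u(x) for x, pi in zip(outcomes, pis))

# With the identity weighting function, decision weights equal probabilities.
print(decision_weights([0.25, 0.25, 0.5], lambda p: p))  # [0.25, 0.25, 0.5]
```

The inverse-S shape overweights small good-news probabilities (e.g. `tk_weight(0.01)` exceeds 0.01), which is one of the behavioural effects the weighting function is meant to capture.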

### Implications

Although a discussion comparing conventional and behavioural approaches to decision-making may seem overly technical, it has relevant policy implications for cyber insurance. Indeed, Prospect Theory establishes that cyber insurance and cyber protection decisions are not made on the basis of the actual risk of experiencing cyber attacks, as captured by the probabilities of suffering such attacks. Instead, these decisions are made in terms of decision weights.

Moreover, Prospect Theory provides the foundation for a methodology to estimate such decision weights in the form of BEEs. Note that this difference is not the result of agents having imperfect information regarding cyber risks, but of the psychological and cognitive mechanisms involving System 1 thinking. Decision weights do not coincide with risks, even if agents are informed and are able to accurately ascertain the value of those risks.

Two implications of the role of weighting functions in the field of cybersecurity are especially relevant. First, the decision as to whether or not to purchase a cyber insurance policy, and therefore the maximum premium that a potential client will pay for the policy, is conditioned by the shape of the weighting function. The estimation and calibration of this function, which can be performed using the Behavioural Economics Experiments presented in the next section, becomes a key tool to determine the optimal pricing of cyber insurance portfolios (Alventosa et al., 2016). Second, the critical role of the weighting function in driving cybersecurity behaviour provides an opportunity to design interventions aimed at enhancing cybersecurity by changing the shape of this function.
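As a rough illustration of the pricing point (a sketch, not a method from the text), consider a client facing a two-outcome prospect: lose an amount L with probability p, lose nothing otherwise. With a linear utility function (an assumption made here for simplicity) and the Tversky-Kahneman (1992) weighting function with their loss-domain estimate gamma = 0.69, the maximum acceptable premium for full coverage is the decision weight of the loss times its size. All parameter values below are assumptions.

```python
def tk_weight(p, gamma=0.69):
    """Tversky-Kahneman (1992) weighting function, loss-domain gamma = 0.69."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def max_premium(loss, p_attack, w):
    """Largest premium P at which full insurance is still acceptable.

    With linear utility (an assumption), the client accepts any P such that
    -P >= w(p) * (-loss), i.e. P <= w(p) * loss.
    """
    return w(p_attack) * loss

# A 1% chance of a 10,000 loss: actuarially fair premium is 100, but the
# overweighting of small probabilities raises the acceptable premium well
# above it.
expected_loss = 0.01 * 10_000
premium = max_premium(10_000, 0.01, tk_weight)
print(premium > expected_loss)  # True
```

Because small probabilities are overweighted, clients may accept premiums several times the expected loss; an insurer pricing from raw attack probabilities alone would leave that margin on the table.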