Discretion in police decision-making

Police work involves considerable autonomy and discretion (Lister and Rowe, 2015), not only around strategic and policy matters but also for day-to-day operational decisions often taken by lower ranks (Wilson, 1978). Such discretion recognises the fallibility of various rules within their field of application (Hildebrandt, 2016). The first Principle of the College of Policing's Authorised Professional Practice on 'Risk' states that "the willingness to make decisions in conditions of uncertainty (i.e. risk taking) is a core professional requirement of all members of the police service" (College of Policing Authorised Professional Practice). This discretion is not unlimited, however: public law expects discretion to be exercised reasonably and the duty to enforce the law to be upheld (R v Metropolitan Police Commissioner ex p. Blackburn [1968] 2 QB 118; R v Chief Constable of Sussex ex p. International Trader's Ferry Ltd [1999] 2 AC 418). Conversely, discretion must not be fettered unlawfully, for instance by failing to take a relevant factor into account when making a decision, such as by considering only those factors that may indicate risk or potential for harm rather than those that might indicate the opposite.

Algorithms have the potential to package relevant factors in a way that could facilitate more efficient decision-making (Babuta, Oswald and Rinik, 2018), contributing to the identification of the factors most relevant to the decision at hand. However, these tools also present a number of threats to legitimate discretionary decision-making. Unnuanced risk scores have been demonstrated to be highly influential on human decision-makers (Cooke, 2010). Their 'binary nature' may even eliminate any discretionary power to deal with the 'hard cases' (Bayamlıoğlu and Leenes, 2018). A significant issue with categorising risk using whole numbers is that this method treats nominal variables as if they were scale variables, implying some form of objective assessment (Heaton, Bryant and Tong, 2019), or indeed inviting the conclusion that someone categorised as 'low risk' needs no help or intervention for their particular circumstances.
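
To make this measurement point concrete, the following minimal Python sketch (with entirely hypothetical categories and cases, not drawn from any operational tool) shows how coding nominal risk labels as whole numbers invites arithmetic that has no defensible meaning:

```python
# Illustrative sketch only: hypothetical categories and cases.
# Coding nominal labels as whole numbers invites arithmetic that
# implies an objective, evenly spaced scale that does not exist.

scores = {"low": 1, "medium": 2, "high": 3}  # whole-number coding

cases = ["low", "high", "low", "low"]        # hypothetical assessments
mean_score = sum(scores[c] for c in cases) / len(cases)
print(mean_score)  # 1.5 -- suggests a 'half-way' risk level with no real referent
```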

Police forces that have implemented predictive algorithms have stressed that such tools are being used in a way that 'supports' and 'enhances', rather than replaces, professional judgement (Durham Constabulary, 2017; Oswald, Grace, Urwin and Barnes, 2018). In its Authorised Professional Practice on Risk, the College of Policing likewise notes that "RI [risk identification], RA [risk assessment] and RM [risk management] tools should be regarded as an excellent, but limited, means of improving the likelihood of identifying and preventing future offending or victimisation. They can enhance professional judgement but not replace it" (College of Policing Authorised Professional Practice). Nevertheless, a statistical prediction may have a significantly prejudicial effect on the human decision-making process. As Cooke and Michie point out, 'it is difficult for the decision-maker to disregard the number and alter their evaluation even if presented with detailed, credible and contradictory information' (Cooke and Michie, 2013).

The way that officers react to algorithmic outputs, and whether they will be prepared to override algorithmic recommendations with their own judgement, may depend to a large extent on the force's attitude to risk and the extent to which individual officers are held responsible for the consequences of alleged omissions and the criticisms made with the benefit of hindsight (Heaton, Bryant and Tong, 2019). Dencik et al.'s case study of Avon and Somerset police's Qlik tool highlights police officers' frustration that the tool initially generated scores that were contrary to their own knowledge and judgement of the individuals concerned (Dencik et al., 2018). This resulted in further development of the tool in terms of data inputs and use of relevant intelligence that remained uncodified: "that breakdown in the relationship isn't going to go into Qlik Sense because it's not a crime, it's an intelligence report and Qlik Sense doesn't pick up intelligence. So we were quite frustrated by that at the beginning" (Avon and Somerset inspector quoted in Dencik et al., 2018). Concern was also expressed that too much importance was attached to the tool, resulting in nervousness about the 'defenceability' of taking action contrary to the algorithmic recommendation (Avon and Somerset inspector quoted in Dencik et al., 2018).

Beyond assessing the relevance and importance of factors which may or may not be coded into a statistical model, officer discretion is also crucial when deciding what further action will be taken on the basis of the risk assessment or forecast. The College of Policing notes that statistical prediction "is recognised as more accurate than unstructured judgement, but is inflexible and blind to specific contexts" (College of Policing Authorised Professional Practice). A numerical 'risk score' provides the decision-maker with no insight into the specific nature or causes of risk, nor guidance as to what intervention measures can be taken to address the risk (Cooke and Michie, 2013). The third principle of the APP on 'Risk' states that "[r]isk taking involves judgement and balance. Decision makers are required to consider the value and likelihood of the possible benefits of a particular decision against the seriousness and likelihood of the possible harms" (College of Policing Authorised Professional Practice). It follows that the soundness and fairness of an officer's decision-making is judged largely on whether they have considered the relative potential benefits and harms of different outcomes. Such a risk-benefit analysis may be highly context-specific and subjective, requiring careful consideration of a range of possible scenarios, including their likelihood and severity.
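
The third principle's balancing exercise is, in effect, a comparison of expected benefits against expected harms. The Python sketch below works that comparison through with invented placeholder figures; in practice such probabilities and weightings are contested judgement calls rather than measurable quantities:

```python
# A minimal sketch of the balancing described by the APP's third
# principle: weigh the value and likelihood of possible benefits
# against the seriousness and likelihood of possible harms.
# All figures below are hypothetical placeholders.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one option."""
    return sum(p * v for p, v in outcomes)

# Hypothetical option: make an early safeguarding visit.
benefits = [(0.30, 8.0)]   # 30% chance of preventing serious harm (value 8)
harms = [(0.10, -3.0)]     # 10% chance of damaging trust (cost 3)

net = expected_value(benefits) + expected_value(harms)
print(net)  # 2.1 -- positive here, but only as good as the assumed weights
```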

The ability to assess 'un-thought of' and uncodified relevant factors as part of the decision-making process must be preserved if discretion is to be applied appropriately. We have previously argued that AI and machine learning tools should not be inserted into a process that requires the exercise of discretion where the tool prevents that discretion, either because all of the factors relevant to the decision cannot be included, or because required elements of the decision itself cannot be appropriately codified into, or by, the algorithm (Oswald, 2018). Use of an algorithmic tool should similarly not prevent the consideration of a range of different potential interventions or measures that can be taken to reduce any identified risk. Furthermore, as Lynskey points out in connection with the prohibition on automated decision-making in Article 11 of the Law Enforcement Directive and the question of adverse effect, much "depends on how the decision-making process occurs in practice. In this context, one would need to gauge to what extent the final decision entails the discretion and judgment of the officer making that decision" (Lynskey, 2019). Practical considerations, in particular the design of the human-computer interface, the avoidance of unnuanced framing of results (such as 'traffic-lighting' of risk levels), and organisational culture and processes, will be crucial to these issues.
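
To illustrate the 'traffic-lighting' concern, the following sketch (with cut-offs invented purely for illustration) shows how banding a continuous risk estimate into red/amber/green exaggerates small differences at the band boundaries while hiding large differences within a band:

```python
# Hedged sketch of 'traffic-lighting': collapsing a continuous risk
# estimate into three colour bands. The cut-offs are invented for
# illustration; any real thresholds would be a policy choice.

def traffic_light(p, low=0.33, high=0.66):
    if p < low:
        return "green"
    if p < high:
        return "amber"
    return "red"

# Near-identical estimates fall into different bands...
print(traffic_light(0.32), traffic_light(0.34))  # green amber
# ...while very different estimates share a band.
print(traffic_light(0.34), traffic_light(0.65))  # amber amber
```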

 