Process issues
Section 64 of the Data Protection Act 2018 provides that a data protection impact assessment must be undertaken by a body considering deploying an APIAT, before any processing of data on ‘nominals’ takes place, where “a type of processing is likely to result in a high risk to the rights and freedoms of individuals”. In essence, this might mean that an integrated impact assessment process is required under the combined measures of the DPA 2018, the Human Rights Act 1998 and the Equality Act 2010. (MOPAC produced an ‘integrated impact assessment’ (MOPAC, 2018b) in addition to its internal review of the operation of the Gangs Matrix over a five-year period, but this did not explicitly incorporate particular human rights as themes in the assessment process in the way that it did with ‘protected characteristics’ in its equality strand, although human rights-related values of protecting communities from harm while improving their cohesion and health were included.)
In terms of a requisite degree of public engagement and transparency over the inception, development and deployment of an APIAT, the public sector equality duty might require evidence gathering from communities that would be disproportionately engaged through the use of the planned APIAT. It is also the view of the notable barrister Timothy Pitt-Payne QC that human rights standards of ‘accessibility’ of legal information now require public engagement over predictive policing issues, chiefly through the release of information notices to the public concerning the relevant APIAT (Pitt-Payne, 2018). There is an advantage to such an approach in relation to what is known as the common law ‘duty to give reasons’: in this context, the degree of information that has to be provided to those individuals affected by decisions informed by the APIAT should be underpinned, and rendered more clearly lawful, through the notification of communities about the use of the APIAT ‘up front’.
A vital consideration in the deployment and use of an APIAT is the extent to which police operational decision-making becomes automated once the tool concerned is in use. Sections 49 and 50 of the DPA 2018 between them provide a particular set of safeguards in connection with fully automated decisions that would have an impact on the rights of an individual in the criminal justice context: firstly, a person is to be informed of the fully automated decision about them, and secondly, should they then take the opportunity to object, the automated decision concerned will need to be re-made by a human officer and decision-maker. Automated decision-making is avoided through there being a ‘human in the loop’, and so a careful and detailed process map of sorts might really help inform a future and more detailed stance by a police organisation on this issue, when determining the nature of a partial intervention with a ‘nominal’ individual based on a fully-, partly- or initially-automated decision that draws on the outputs of an APIAT.
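By way of illustration only, the short Python sketch below models the notification-and-objection safeguard described above as a simple process step; the names used (Decision, apply_safeguards, officer_review) are hypothetical and do not correspond to any real police system or to the statutory wording itself.

```python
# A minimal, hypothetical sketch of the Sections 49-50 DPA 2018 safeguards:
# a fully automated decision is notified to the data subject and, if they
# object, it is re-made by a human decision-maker. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class Decision:
    nominal_id: str
    risk_band: str            # e.g. "low", "medium" or "high"
    automated: bool           # True if made without meaningful human involvement
    notified: bool = False
    remade_by_human: bool = False

def apply_safeguards(decision: Decision, objection_received: bool, human_review) -> Decision:
    """Notify the subject of a fully automated decision and, on objection,
    route it to a human officer to be re-made (the 'human in the loop')."""
    if not decision.automated:
        return decision                  # safeguards only bite on automated decisions
    decision.notified = True             # first safeguard: notification
    if objection_received:               # second safeguard: objection triggers human re-making
        decision = human_review(decision)
        decision.remade_by_human = True
        decision.automated = False
    return decision

# Example use with a hypothetical human reviewer who re-makes the decision.
def officer_review(d: Decision) -> Decision:
    return Decision(d.nominal_id, "medium", automated=False, notified=d.notified)

result = apply_safeguards(Decision("N-001", "high", automated=True), True, officer_review)
print(result)
```

Mapping the decision flow in this explicit way is one means by which a police organisation could evidence where, and by whom, an automated output is reviewed before an intervention follows.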
Furthermore, a key issue in process terms is the way that an APIAT is developed with a particular emphasis on the ‘trade-off’ sought between two types of accuracy in the chosen predictive model: statistical sensitivity (which in extremis would make a tool as strong as possible at predicting high-risk offenders, for example, but with a high ‘false positive’ rate down the line) or statistical specificity (an emphasis in the trade-off toward the desire to correctly sort low-, medium- and high-risk offenders, with a correspondingly lower ‘false positive’ rate for its outputs, albeit with more ‘false negatives’, or high-risk offenders ‘missed’). With regard to the statutory bar on the processing of data in ways that are ‘misleading’, given the language of S.205 DPA 2018, a careful focus on statistical specificity over sensitivity is to be applauded, as it will clearly in most instances be less ‘misleading’ to use a model weighted toward the former (Grace, 2019).
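As a purely illustrative sketch of this trade-off, and assuming a hypothetical set of risk scores and outcomes rather than data from any real APIAT, the short Python example below shows how raising a risk-score threshold increases specificity at the expense of sensitivity, and vice versa:

```python
# Illustrative only: how a risk-score threshold trades sensitivity against
# specificity. The scores and labels below are hypothetical, not drawn from
# any real APIAT.

def sensitivity_specificity(scores, labels, threshold):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) for a given risk-score threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical risk scores and true outcomes (1 = later confirmed high-risk).
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2, 0.15, 0.05]
labels = [1,   1,   0,    1,   0,    0,   1,   0,   0,    0]

for threshold in (0.3, 0.5, 0.7):
    sens, spec = sensitivity_specificity(scores, labels, threshold)
    print(f"threshold={threshold:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```

On this toy data, the higher threshold ‘misses’ more genuinely high-risk individuals (lower sensitivity) while wrongly flagging fewer low-risk ones (higher specificity), which is the essence of the trade-off, and of the ‘misleading’ outputs concern, described above.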
Issues of human rights impacts
Issues of human rights impacts will typically boil down to the application of a ‘proportionality analysis’, in addressing the interference with Article 8 ECHR, that is to say the right to respect for private life under the Convention, caused by the processing of an individual’s data through a machine learning-based tool. Given that most of the data processed by such a tool will be confidential data privy to criminal justice bodies and partner agencies, as opposed to SOCMINT, and so not readily in the public domain, these will be data whose processing will typically give rise to a ‘reasonable expectation of privacy’ on the part of an individual or ‘nominal’. As such, UK common law in interpreting the Convention requires that there is an overall fair balance “between the rights of the individual and the interests of the community” (as per R. (on the application of Quila) v Secretary of State for the Home Department [2011] UKSC 45). Following the now-standard approach of the UK courts to this proportionality analysis, what will help determine a truly ‘fair balance’ will be the extent of a rational basis for the processing overall, and whether the processing is ‘no more than necessary’ to achieve an objective of sufficient importance. Other ‘qualified’ rights under the ECHR, such as freedom of expression, will also instigate such a proportionality analysis if they are engaged; while the right to freedom from discrimination in the enjoyment of the right to respect for private life (engaged if the decision-making supported by an algorithm leads to indirectly discriminatory outcomes, for example) is violated should there be a pattern of discriminatory data governance practice that is manifestly without reasonable foundation (Grace, 2019; Raine, 2016).