Insurance and the legal challenges of automated decisions: An EU perspective

Paola Manes

From Big Data to automated decisions

Data management and new technologies

Over the past two decades, the volume of data generated and processed has increased significantly across several business sectors, such as insurance and financial services (to name just a few).[1]

Although data has been analysed for millennia, the rise of so-called ‘Big Data’ has led to the emergence of a specific discipline that studies how to analyse large datasets and turn them into meaningful information that we can make use of, for example, to make predictions: ‘Data Science’.

Data Science can be considered as an amalgamation of classical disciplines such as statistics, Artificial Intelligence (AI), mathematics and computer science; it combines existing approaches with the aim of turning abundantly available data into value for individuals, organisations and society.[2]

Today, just as a large volume of soil and raw material must be extracted from a mine in order to obtain a small amount of precious material, large volumes of data can be processed to construct a simple model of valuable use (eg one with high predictive accuracy).

The concept of ‘Data Mining’ intertwines with that of ‘Machine Learning’ (ML). Although the difference between the two has become less relevant over time and the boundaries have begun to blur, put rather simply, Data Mining aims at finding knowledge in data, while ML aims at ‘teaching’ a machine how to do so.

In particular, the core aim of Machine Learning is to make inferences from example data or experience, using statistical theory to build mathematical models; ML methods thus play an increasingly important role in data analysis today, as they can deal with massive amounts of data.

ML studies algorithms that can learn from data to obtain knowledge from experience and to generate decisions and predictions; in its most basic form, an algorithm can be described as a set of instructions or rules given to a computer to follow and implement.

The ability of algorithms to identify, select and determine information of relevance beyond the scope of human decision-making creates a new kind of decision optimisation; the level of accuracy of decisions, however, depends both on the design of the algorithm itself and on the data it is based on.
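As a purely illustrative sketch (the example, the function name and the numbers are assumptions for the purpose of this note, not drawn from the literature cited here), the process of ‘learning from example data’ described above can be reduced to a few lines of code: an algorithm fits a rule to observed examples and then applies that learned rule to generate a prediction for an unseen case.

```python
# Illustrative sketch only: a minimal 'learning' algorithm that infers a rule
# from example data by ordinary least squares, then uses the learned rule
# (y = a*x + b) to predict an outcome for a new, unseen input.

def fit_line(xs, ys):
    """Learn slope (a) and intercept (b) from example data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var          # slope: how strongly y varies with x in the examples
    b = mean_y - a * mean_x  # intercept: fitted baseline value
    return a, b

# Hypothetical training data (invented for illustration): an observed outcome
# (ys) against some observed feature (xs).
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]

a, b = fit_line(xs, ys)
prediction = a * 6 + b  # apply the learned rule to an unseen case, x = 6
```

The sketch makes concrete the point stated in the text: the learned rule, and hence the accuracy of any resulting prediction, is entirely determined by the design of the algorithm (here, a straight-line model chosen in advance) and by the data it is given as examples.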

  • [1] From the rich literature on this topic, see eg, Phil Simon, Too Big to Ignore: The Business Case for Big Data (Wiley 2013) 7; Viktor Mayer-Schönberger and Kenneth Cukier, Big Data: A Revolution that Will Transform How We Live, Work, and Think (Houghton Mifflin Harcourt 2013) 6-7; International Association of Insurance Supervisors (IAIS), FinTech Developments in the Insurance Industry (2017) 32 (call-off date for all hyperlinks, unless stated otherwise: 30 June 2020); Giovanni Comandé, ‘Regulating Algorithms’ Regulation? First Ethico-Legal Principles, Problems, and Opportunities of Algorithms’ in Tania Cerquitelli and others (eds), Transparent Data Mining for Big and Small Data (Springer 2017) 169; Antonella Cappiello, Technology and the Insurance Industry: Re-configuring the Competitive Landscape (Springer 2018) 9-10; Davide Mula, ‘Big Data vs Data Privacy’ in Giusella Finocchiaro and Valeria Falce (eds), Fintech: diritti, concorrenza, regole. Le operazioni di finanziamento tecnologico (Zanichelli Editore 2019) 355-356; Marco Delmastro and Antonio Nicita, Big Data. Come stanno cambiando il nostro mondo (Il Mulino 2019) 14; Viktoria Chatzara, ‘FinTech, InsurTech, and the Regulators’ in Pierpaolo Marano and Kyriaki Noussia (eds), InsurTech: A Legal and Regulatory View (Springer 2020) 3; The Economist, ‘An understanding of AI’s limitations is starting to sink in’ (2020) Technology Quarterly.
  • [2] The term ‘Big Data’ was introduced against the background of the massive increase in global data. Initially the idea behind ‘Big Data’ was that the volume of information had grown so large that the quantity being examined no longer fit into the memory that computers use for processing, so engineers needed new tools for analysing it all; that is the origin of new processing technologies that let one manage far larger quantities of data than ever before. See eg, Mayer-Schönberger and Cukier (n 1) 6-7.
  • [3] See eg, Mike Loukides, What is Data Science? (O’Reilly Media 2011) 2, who explains that ‘data scientists are involved with gathering data, massaging it into a tractable form, making it tell its story, and presenting that story to others’. See also Nils J Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge University Press 2010) 398; Mayer-Schönberger and Cukier (n 1) 6-7.
  • [4] See eg, Nilsson (n 3) 398; Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know (OUP 2016) 12; Arvind Narayanan and Dillon Reisman, ‘The Princeton Web Transparency and Accountability Project’ in Tania Cerquitelli and others (eds), Transparent Data Mining for Big and Small Data (Springer 2017) 46.
  • [5] Ethem Alpaydin, Introduction to Machine Learning (3rd edn, MIT Press 2014) 3.
  • [6] See eg, Toon Calders and Bart Custers, ‘What Is Data Mining and How Does It Work?’ in Bart Custers and others (eds), Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases (Springer 2013) 29.
  • [7] See eg, Narayanan and Reisman (n 4) 46.
  • [8] See eg, Calders and Custers (n 6) 27; Kaplan (n 4) 12; Narayanan and Reisman (n 4) 46.
  • [9] From the rich literature on this topic, see eg, Alpaydin (n 5) 1; Bruno Lepri and others, ‘The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good’ in Tania Cerquitelli and others (eds), Transparent Data Mining for Big and Small Data (Springer 2017) 6-7.
  • [10] See eg, Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Penguin 2015) 6.
  • [11] See eg, Nilsson (n 3) 495; Alpaydin (n 5) 3.
  • [12] See eg, Nilsson (n 3) 495; Simon (n 1) 46; Delmastro and Nicita (n 1) 8.