The Enmeshment of the Machine in Complex Governance Processes and Networks

Lasse Gerrits, Elizabeth Anne Eppel, and Mary Lee Rhodes


Chapters 11 and 12 in this collection provided an overview of the state of governance theories and their blending with complexity theory, as well as empirical examples of innovations such as alternative governance arrangements. This chapter provides an outlook on the challenges of this hybrid field of inquiry, focusing on the attempts of both Public Administration as a discipline and public administration as a sector to uncover the complexity of governance as described in the previous chapters.

Central to this search will be the tension between an ever-increasing awareness of the complexity inside and outside government on the one hand, and a near-universal, inevitable desire for simplicity on the other. The advent of the digital age — in the shape of algorithms and software running on increasingly powerful computers that process many types of societal data to deliver the information upon which decisions are made — accelerates the evolution of the networks in which governmental and non-governmental actors are entangled. The idea that the computer can be a machine actor enmeshed in a network of actors is nothing new, of course; Actor-Network Theory had that covered decades ago (Latour, 2009). What has changed, however, is that this digital machine, its hardware and software, has gained more autonomy, which is intimately connected with the tension between complexity and simplicity mentioned above.

What follows will first reflect on the intellectual development in our field to highlight that theoretical and conceptual development is not necessarily one of continuous improvement in an absolute sense. Instead, it demonstrates a movement back and forth between relatively simple, broadly applicable theories and concepts, and more finely grained ones with more explanatory value for a smaller universe. Next, this will be related to the advent of digital technology, such as big data, machine learning, and the use of algorithms. The remainder of the chapter explores how these relate to public administration as a sector and a discipline. As it stands, the field has only started to scratch the surface of this novel societal development. Using insights from various other fields, this chapter provides ample ideas for future research.

Evolution of Governance Network Theories

Public Administration has been through an intellectual evolution that started with the borrowing of simple explanatory models from other disciplines and has arrived in a place where its models are arguably internal to the discipline, as Eppel et al.’s discussion in Chapter 11 shows. The arrival of governance network theories and the ways in which they are captured in explanatory concepts and models have been instrumental in cementing Public Administration as a full-fledged discipline. The models became increasingly detailed and granular as more research uncovered more factors to account for and explain. Comparing, for example, iron triangles with current iterations of network models (Klijn & Koppenjan, 2014, 2015; Klijn & Snellen, 2009) shows how far the discipline has come with regard to the level of detail that those models can render.

The history of thought about governance networks has been told so often that there is no need to repeat it here. Does this intellectual development signal an improved understanding of the world? It is tempting to equate the evolution of those models and concepts with an accumulation of knowledge about what makes governments tick. In other words: to believe that those more detailed models and concepts are a sign of an increasingly detailed understanding of the object of interest. The actual picture is more complicated. Two factors need to be considered. First, politicians and administrators change their operations in the face of societal dynamics, so the object of interest resembles a moving target. Second, there is an important difference between the appearances of governance and its underlying mechanisms. For example, the ongoing liberalization of the European market for public transportation (EC directives 91/440/EEC and 2012/34/EU) resulted in new governance networks at the European level, e.g. in the shape of working groups (Egeberg & Trondal, 2017; Gilardi, 2002; Majone, 1997), but the mechanisms underneath liberalization are well known: the devolution of tasks, the establishment of principal-agent relationships, the introduction of key performance indicators, the use of specific contract forms, etc. (Schipper & Gerrits, 2018). Thus, models and concepts that explain liberalization in the European railway market as a consequence of EU policies tap into a different layer of reality than models and concepts that map and explain the basic mechanisms underneath this particular manifestation of liberalization.

The relationship between these two factors on the one hand, and concepts and models of governance on the other, is mediated through the movement between conceptual intension (the attributes an object must feature to fit the concept) and extension (the class of objects referred to; Toshkov, 2016). If the concept of governance networks is given the attributes ‘multi-actor’ and ‘lack of hierarchy’, it would cover a wide range of cases with these properties but would also ignore some finer-grained differences between such cases. Conversely, if one changes the intension by adding more attributes to account for those differences, the range of cases covered will be more limited. This would allow one to generate more precise empirical statements, but the statements derived hold true for a smaller set of cases. The more abstract the concept, the greater its extension (Goertz, 2005; Toshkov, 2016). The intellectual development of concepts and models of governance and complexity is therefore not necessarily a linear path from a crude understanding of reality towards better explanations and knowledge accumulation, but rather a movement across a spectrum between simple models and concepts fitting a wide range of cases, and complex models and concepts fitting fewer cases (Boisot, 1998).
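The trade-off can be made concrete in a small sketch (the cases and attributes below are invented for illustration): if a concept's intension is treated as a set of required attributes, its extension is simply the set of cases featuring all of them, and every added attribute can only shrink that set.

```python
# Hypothetical cases, each described by a set of attributes.
cases = {
    "Dutch social housing policy":  {"multi-actor", "lack of hierarchy", "national"},
    "US regional planning":         {"multi-actor", "lack of hierarchy", "regional"},
    "EU railway working group":     {"multi-actor", "lack of hierarchy", "supranational"},
    "ministerial department":       {"hierarchy", "national"},
}

def extension(intension):
    """All cases featuring every attribute in the concept's intension."""
    return {name for name, attrs in cases.items() if intension <= attrs}

# A lean intension covers many cases; adding attributes narrows the coverage.
broad = extension({"multi-actor", "lack of hierarchy"})
narrow = extension({"multi-actor", "lack of hierarchy", "regional"})
```

Here `broad` contains three cases while `narrow` contains only one, mirroring the movement between widely applicable and finely grained concepts.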

The rapid diffusion and use of the governance network concept in scholarly research thrives on its broad intension, i.e. its ability to cover many instances. As such, it can be deployed in a wide range of settings, such as the analysis of Dutch social housing policy (Klijn, 1996) and the analysis of regional planning processes in the United States (Koliba, Meek, & Zia, 2010), despite major differences between the two topics and their surrounding contexts. The same goes for applications of complexity theory, as recounted in this Domain of the book. Many aspects of complexity theory seem general enough to fit a wide range of realities.

The trade-off between intension and extension is particularly important for understanding what the entire set of concepts and models of governance and complexity does in intellectual and practical terms. While some concepts and models are broad, others can be very specific indeed. This increases their appeal but also creates considerable confusion about the object of interest — it is not hard to imagine scholars talking past each other if the conceptual intension is broad enough. For example, a bibliometric study on the use of the concepts of self-organization and self-governance in urban and spatial planning shows that there are at least five main conceptualizations in use that are at least partially mutually exclusive (de Bruijn & Gerrits, 2018). When applied to 38 different topics, there is little resemblance among them. Authors do refer to each other’s work, but it is often unclear if they understand the concept in the same way as others (ibid.). There are some indications that they talk from different planes.

The study by de Bruijn and Gerrits (2018) illustrates what has happened in the realm of governance network and complexity theories: wide adoption and popularity, but a very amorphous field of inquiry. The ongoing development in Public Administration thrives on the ways in which scholars negotiate the tension between complexity and simplicity: having access to concepts and models that are simple and convenient to use, but which also address the complexity inherent to social reality. There is not necessarily a ‘next’ in the sense of an entirely new and better theory or model for governance and complexity (all the basic mechanisms have been covered), but rather a further refinement of existing concepts, models, theories and methods to better probe into how those mechanisms interact with specific contexts, and how certain outcomes come about.1

Further Unboxing

Some have argued that the world has become an increasingly complex one that requires specific approaches that account for that complexity (Gerrits, 2012; Morcol, 2002, 2012; Sharkansky, 2002). The world may not be more complex in an absolute sense — for example, figuring out how to contain the Black Death in medieval Eurasia must have been a very complex challenge — but we have become better at uncovering social complexities. The more we see, the more we realize all that we do not know yet. A major factor affecting the experience of increased complexity comes in the shape of digitization — a novel societal dynamic. Digitization of data and methods simultaneously offers unprecedented access to information to guide governmental action and a means to make sense of a daunting overabundance of data. Big data, algorithms and machine learning seem to offer administrators a way out of the social complexity, but also add to it by introducing new technologies.

For scholars in our field, digitization promises a similar push. It may give them an overview and insights otherwise not possible. For example, scholars can now download, structure and traverse the entire corpus of policy papers concerning deregulation as issued by the European Union. Digitization indeed offers new ways of doing things and of understanding complexity, but it should not be taken at face value. The inner workings of the machine should be understood before we consider its usefulness for decision-making and for analysing how governance networks shape decisions under complex conditions. We should move beyond the interface through which data and knowledge are presented (Burkholder, 1992). The next section dissects the machine to understand how it operates and the ways in which it is gaining agency in governance theory and practice and in research.


Digitization is a broad term that captures a wide range of techniques for data processing and representation. What we are particularly interested in are the current and future technologies geared towards sorting, relating and predicting social complexity. After all, this may be the avenue worth pursuing for policy-makers and scholars in the field. To this end, we will need to discuss the main properties of big data, algorithms, and machine learning. In practice, there appears to be some confusion (and some hype) about what these three are and how they relate (boyd & Crawford, 2012; Manovich, 2012). Of the three, big data may be the most loosely defined.

In general, big data concerns data characterized by its unsorted diversity as well as its granularity. While traditional research may focus on key data selected on the basis of prior theory or knowledge and defined as variables, this differentiation is missing from big data. In such large, diverse and unstructured data sets, each utterance, each bit of information, forms a variable (Mackenzie, 2015). The key to working with this daunting abundance of information is categorization, i.e. the sorting and labelling of every bit of data such that those bits can be related in one way or another. Since every bit of data is a variable, the entire data set forms a very high-dimensional space where countless bits of data are related to countless other bits of data. The data set can be dynamic, too: new data may enter or leave the set continuously. How these data-as-variables relate will emerge once sufficient data have been collected and labelled — which is why such data sets tend to be enormous.2 Naturally, it is considered impossible to sort those data manually and to discern the patterns that matter.
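A minimal sketch (with invented snippets of text) shows how unsorted data becomes such a high-dimensional variable space: every distinct token is treated as its own variable, so the dimensionality grows with every new bit of data that enters the set.

```python
# Invented document snippets standing in for unsorted big data.
docs = [
    "policy paper on rail deregulation",
    "rail safety report",
    "housing policy memo",
]

# Categorization: every distinct token becomes a labelled variable (a dimension).
vocab = sorted({word for doc in docs for word in doc.split()})

# Each document is then a point in this high-dimensional space.
vectors = [[doc.split().count(word) for word in vocab] for doc in docs]

# The set is dynamic: new data may add new dimensions.
docs.append("rail policy evaluation")
vocab_after = sorted({word for doc in docs for word in doc.split()})
```

Even three short phrases already span nine dimensions; adding one new phrase adds a tenth, which is why real data sets of this kind become enormous so quickly.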

This is where algorithms and machine learning come into play. Some machines sort data based on predefined criteria. Machine learning, however, enables the machine to develop categories and labels on its own, actively sorting and relating data without much instruction as to how exactly this should be done. The basic principle of machine learning constitutes a positive feedback loop. The data are labelled and related, and the outcomes are then tested to see if the sorting has made sense. If not, the data will be related repeatedly until the sorting starts to approach reality. Once the resulting predictions are confirmed, the machine will be better able to sort new data sets. In other words: the more a machine knows, the more it can know, i.e. generalization through mobilization (Mackenzie, 2015). Feeding it more data will improve the capacity of the machine to learn and to get better at sorting data and predicting outcomes.
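The feedback loop can be sketched with a deliberately simple learner (the data are synthetic and the model is a plain perceptron, chosen purely for illustration): the machine labels points, tests its labels against known outcomes, and corrects itself until its sorting approaches reality.

```python
import random

random.seed(42)

# Synthetic "reality": points above the line y = x belong to class 1.
# A small margin around the line keeps the example cleanly separable.
points = [(random.random(), random.random()) for _ in range(400)]
points = [(x, y) for x, y in points if abs(y - x) > 0.05]
labels = [1 if y > x else 0 for x, y in points]

w, b = [0.0, 0.0], 0.0  # the machine's current way of sorting the data

def predict(x, y):
    return 1 if w[0] * x + w[1] * y + b > 0 else 0

# Positive feedback loop: every tested prediction nudges the sorting,
# repeated until the labels the machine produces approach reality.
for _ in range(50):
    for (x, y), target in zip(points, labels):
        error = target - predict(x, y)  # 0 when the sorting made sense
        w[0] += 0.1 * error * x
        w[1] += 0.1 * error * y
        b += 0.1 * error

accuracy = sum(predict(x, y) == t
               for (x, y), t in zip(points, labels)) / len(points)
```

The more confirmed predictions the machine accumulates, the better its weights sort new points: a toy version of ‘generalization through mobilization’.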

Machine learning runs on a collection of known and tested statistical techniques to do the labelling and sorting. The apparent magic derives from the speed with which these enormous amounts of data are labelled, sorted, tested, and resorted and relabelled until they produce meaningful output. Even then, it is usually still impossible for human operators to track and trace how the machine traversed the high-dimensional data set and came up with a given output (Latour, Jensen, Venturini, Grauwin, & Boullier, 2012; Mackenzie, 2015; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). The best way of telling that the machine has learned is by looking at its ability to generalize (Burrell, 2016). The central issues with generalization are two-fold. First, the resulting model may adapt itself too closely to the current data set and subsequently fail to generalize (overfitting), or it may not be complex enough, representing too little and performing poorly in generalization (underfitting). Second, the learning works well as long as the object it is learning about remains more or less static. A static object allows the machine to fine-tune its model and become increasingly good at predicting the output. However, every change in the object of interest requires a new iteration and a change of model. By implication, machine learning has a hard time keeping up with the complexity of social reality, which is often more changeable than not (Mackenzie, 2017).
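Both failure modes can be illustrated in a few lines (the data are synthetic, invented for illustration): a model that memorizes its training set scores perfectly on seen data yet fails on fresh data, a constant guess is too simple to capture the pattern at all, and a model of appropriate complexity — here a single learned threshold — generalizes best.

```python
import random

random.seed(7)
NOISE = 0.2  # chance that an observed label is flipped

def sample(n):
    """Noisy observations of the rule 'class 1 when x > 0.5'."""
    xs = [random.random() for _ in range(n)]
    return [(x, (x > 0.5) ^ (random.random() < NOISE)) for x in xs]

train, test = sample(500), sample(500)

# Overfitting: memorize every training point, noise included.
memory = dict(train)
overfit = lambda x: memory.get(x, False)

# Underfitting: a constant guess, too simple to represent the pattern.
underfit = lambda x: True

# Appropriate complexity: one threshold tuned on the training data.
best_t = max((t / 100 for t in range(101)),
             key=lambda t: sum((x > t) == y for x, y in train))
fitted = lambda x: x > best_t

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)
```

On the held-out sample the memorizing model collapses towards chance while the threshold model retains its accuracy; the constant model never rises above chance in the first place.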

Algorithms are nothing but if-then rules — there is no magic at play — but there can be many, and they can be combined. Many decisions in non-digitized research can be considered algorithmic, too, for example when all instances having a given set of attributes are considered to fall under the scope of a particular concept. When it comes to digital algorithms, a principal distinction can be made between reactive systems, i.e. algorithms that trigger an automated response, and pre-emptive systems, i.e. algorithms that utilize historic data to infer predictions about future behaviour (Yeung, 2018). An example of the first would be a speed camera monitoring car drivers: once someone drives faster than the pre-set limit, it will register that driver as an offender. An example of the second would be machine learning. Combinations of algorithms drive machine learning. Those algorithms can be relatively simple, but stacked together they can render powerful outputs (Mackenzie, 2017). For example, one simple algorithm could label all instances of a particular word in communications as a possible indicator of social security fraud, and another could check whether those words correspond with actual fraud. Pitching these against other algorithms that label and sort the data in a different way and check their predictions against outcomes, the learning can be improved by keeping the best-performing algorithm and discarding the others (Salcedo-Sanz, Del Ser, Landa-Torres, Gil-Lopez, & Portilla-Figueras, 2014). In the realm of public administration, this could be the algorithm that performs best in predicting fraud, crime or recidivism. More far-reaching cases, with real consequences, can be observed in, e.g., China: the Chinese state-run ‘Situation Aware Public Security Evaluation’ (SAPE) platform records and maintains profiles of numerous individuals in an attempt to predict certain undesirable behaviours. In short, machine learning relies on algorithms. It actively selects and shapes the algorithms that work best, i.e. it is capable of enhancing its own learning capacities.
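The stacking-and-selection principle reads, in a deliberately simplified sketch, as follows (the records, keywords and outcomes are invented for illustration; real fraud detection involves far more than keyword rules): each candidate algorithm is a single if-then rule, its labels are checked against known outcomes, and only the best performer is kept.

```python
# Invented communication records with known fraud outcomes.
records = [
    {"text": "claim urgent transfer now",      "fraud": True},
    {"text": "monthly benefit statement",      "fraud": False},
    {"text": "urgent claim second account",    "fraud": True},
    {"text": "address change confirmation",    "fraud": False},
    {"text": "urgent transfer to new account", "fraud": True},
    {"text": "routine payment notice",         "fraud": False},
]

# Each candidate algorithm is one if-then rule:
# if the keyword occurs, label the record as possible fraud.
rules = {
    "flag 'urgent'":  lambda r: "urgent" in r["text"],
    "flag 'account'": lambda r: "account" in r["text"],
    "flag 'payment'": lambda r: "payment" in r["text"],
}

def hit_rate(rule):
    """Share of records where the rule's label matches the known outcome."""
    return sum(rule(r) == r["fraud"] for r in records) / len(records)

# Keep the best-performing rule, discard the others.
best = max(rules, key=lambda name: hit_rate(rules[name]))
```

Stacked at scale and iterated over fresh outcomes, this keep-the-winner step is what allows a machine learner to shape its own learning capacity.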

Digitization and Its Consequences for Public Administration

The machine has already become a partner in decision-making. This is not just a reconceptualization in the vein of Latour and colleagues (Latour, 1991, 2005; Venturini, Jacomy, Meunier, & Latour, 2017), in which one reframes the role of technology in social systems.3 Indeed, machines have previously taken on such an important role that policies are based on them. Early roles for the machine concerned the computing of input given by human operators, e.g. calculating the possible effects of a certain policy measure. This is the well-known role of computational decision support systems and is primarily in the hands of administrators and experts. The machine can also be used to improve the accessibility of its output, e.g. through visualizations, which then impact actual decision-making when presented to a wider public (Gerrits & Moody, 2011).

In the cases described above, the role of the machine can be seen as essentially passive — it produces outputs exactly as it was instructed to. Algorithms are in place — otherwise there would be nothing to compute with — but machine learning and big data sets are not. Those two factors make the difference between a machine that produces a complicated but essentially traceable output, and a machine that produces outputs not directly traceable by human operators. One could argue that such machines have obtained a higher degree of agency in the network because of their inner workings and because of the ways in which administrators and others rely on their output (see, e.g., Yeung, 2018, for an overview of applications). The effects of such machines are real. For example, the iterations in machine learning towards increased fit, so essential for the machine to work in the first place, may lead to a normalization of a situation: because people act upon the recommendations, they confirm to the machine that the algorithm is indeed correct in its recommendations (Coglianese & Lehr, 2017). Ultimately, all machine learning is geared towards ordering, transforming, and shaping unstructured data in such a way that it can detect patterns that would neither be visible to the naked eye nor accessible through conventional statistical methods used in isolation with more limited data sets (Mackenzie, 2015). Some of the obvious errors can be corrected (e.g. prohibiting the machine from using the label ‘ethnicity’ when traversing crime statistics), provided that the human operator is vigilant enough.

The keyword, then, is traceability (alternatively: followability, or being intelligible). One can, and should, ask how machines arrive at their recommendations (Coglianese & Lehr, 2017), but this may be extremely complicated. The weak spot may not rest with the machine itself but with how humans interact with machines. Even if the machine could share the reasons for its decision, there is no guarantee that human operators would understand. The problem is two-fold. First, we deal with machines that do not know how to print an intelligible, followable output suitable for the person requiring that information (Norman, 1989). This is already an issue when the machine relies on a crisp database (Beierle, Kern-Isberner, Bibel, & Kruse, 2003; Clancey, 1983; Puppe, Gappa, Poeck, & Bamberger, 2013) but becomes even more complicated when the database is ambiguous and the information needs are not clearly defined a priori (Mast, Falomir, & Wolter, 2016). In any case, the ex-post explanation is still an aggregate of various algorithms, so humans are unlikely to observe the machine working through each bit of data. Second, there is ample evidence that humans perform poorly in the role of monitor. Getting the machine in the loop has the advantage of analysing heaps of unstructured data that cannot be processed by humans alone. The disadvantage is that it induces passivity, because humans will no longer be actively involved in structuring data and creating outputs. Such passivity impacts awareness to the extent that humans will not comprehend the output, even if it was produced in a comprehensible way (Dixon & Wickens, 2006; Endsley, 1995, 1996), and, subsequently, will not use it. Moreover, information is irretrievably lost if no initial attention is paid (Peterson, 1985), and humans process complex information poorly, regardless of how it is produced and presented (Gerrits, 2012).

In short, traceability is less effective if it pushes humans into a passive role. Both the machine and the human alone, as well as in interaction with each other, may introduce weaknesses into the decision-making process. While the machine has already become an actor in the governance network because of its autonomy in developing solutions from unsorted data, we are still a long way off building a seamless mesh of humans and machines (Pantic, Pentland, Nijholt, & Huang, 2006).

Digitization and Its Consequences for Our Field

The possibilities and limitations — such as traceability and vigilance — described in the previous section apply equally to the role of the machine in researching complexity and governance. Some scholars have announced the end of theory now that big data has arrived: if machine learners can sort and relate unstructured data sets, the analysis will not require any prior theories to make sense of the data. Essentially, the data will make sense of themselves (Anderson, 2008), a thought echoed in our field (see, e.g., Keast, Koliba, & Voets, 2019). However, this thinking requires some more in-depth investigation. To some extent, digitization in general, and machine learning in particular, can help identify patterns in the blink of an eye. Machines can process data volumes that individual researchers could not. Of course, there are operational concerns (e.g. data availability, data preparation, how to set up the machine, etc.) and ethical concerns (e.g. how to deal with consent).

Underneath those issues, however, are more fundamental concerns about the idea that big data will render theory obsolete. First, the premise is that the available data fit the questions being asked. This is not necessarily the case. Any researcher will relate to the experience that good-quality data are hard to come by. Second, it presumes that volume constitutes objectivity and will make up for a potential lack of accuracy. This, again, is not the case. Big data are not necessarily better data, and bigger data sets do not necessarily constitute complete data sets (see boyd & Crawford, 2012, for a critical discussion using an example of network reconstructions on the basis of incomplete data harvested from Twitter). Third, data of any type lose their meaning and explanatory value without contextual information (Gerrits & Verweij, 2013; Jopke & Gerrits, 2019). This abiding argument in social science research methods also applies to big data (boyd & Crawford, 2012). Big data and machine learning must deal with the same movement between intension and extension discussed above, just as any other type of research has to. Their reliance on sheer volume (broad intension) may render them less competent in the face of complex and detailed data — should those data be available and ready in the first place.

This now takes us back to the role and future of theories, models and concepts of governance networks and complexity. Big data and machine learning will neither render our prior knowledge obsolete nor herald the end of theories. Indeed, all of these are pivotal for developing a better understanding of what makes government tick. Contemporary big data and machine learning do not operate at the level of mechanisms, where they would have to identify causes and consequences. This is not to say that they would (theoretically) be unable to do so, but at the very least they would need guidance from the established theories in our field. Of course, if appearances are all that is needed, they can be a great addition to the methodological toolkit.

This brings us back to how big data and machine learning cater to the desire for simplification. For a fuller understanding of the complexity of governance networks, we may want to push in different directions. Both avenues have value; they just tap into different aspects of knowledge production. Digitization offers new tools that shape not only the data, as discussed above, but also the questions we ask and the way we look at the world (boyd & Crawford, 2012). The positive feedback loop central to machine learning, and the normalization of situations when machine learning is enacted in actual decision-making, mean that scholars are looking at a reality partially generated by the machine itself.

Conclusions and Reflections

This chapter started with the observation that the intellectual evolution in our field is not necessarily one of steady knowledge accumulation, with more recent insights being better than older insights. Rather, it signifies a movement between conceptual intension and extension as researchers try to mediate the tension between the simple and the complex. The question is not which theory or method will follow after governance network theories, but instead how researchers will negotiate the granularity inherent to any complex system. We then focused on the advent of big data and machine learning as they become enmeshed in both actual governance networks and in research practices. The desire to use big data and machine learning to comprehend complex issues may stem from the desire for simplicity.

While there certainly is merit in this argument, there are also real limitations, as outlined above. As such, we would caution scholars about the possible hype that is digitization — indeed, some authors in our field appear too uncritical when it comes to what digitization may promise. Likewise, scholars should be aware of the consequences that the increased use of interventions based on these tools might have for outcomes. The unboxing of complex governance networks is still very much in the hands of researchers. Specifically, we see the following questions across a range of disciplines as having significant potential to carry governance research and practice in productive new directions:

Questions in the domain of information systems and machine learning, which could be studied in public administration contexts:

How data gets transformed when processed by the machine and how decisions are rendered;

How that transformation compares to the way in which humans process data when making decisions;

How the machine can be taught to understand concepts, conceptual intension and extension in order to offset the theoretical void that machine learning (still) has.

Questions in the domain of public administration, but which will require significant drawing from information systems and data processing theory:

What are the key drivers leading to the adoption of big data, algorithms and machine learning in public administration, and in what areas are these technologies likely to be found;

To what extent does the deployment of these technologies change (or not) the human/institutional processes of government.


  • 1 This is a realist viewpoint (Archer, 1998; Bhaskar, 2008; Elder-Vass, 2004, 2005; Gerrits & Verweij, 2018). In this ontology, reality is stratified in three layers: the real, the actual and the empirical (Bhaskar, 2008). The real concerns all the mechanisms in society, the actual concerns those mechanisms that are activated or actualized because of certain conditions that trigger them, and the empirical concerns the personal experience of observing that actualized reality (Gerrits, 2021). The quest is to uncover under what conditions certain mechanisms are actualized (Gerrits & Verweij, 2013; Pawson, 2006). At the very least, scholars should be clear about the level at which their work takes place. This is often left out of discussions.
  • 2 Mackenzie also pays attention to the transformation that data undergoes when digitized: it becomes encoded in bits. This encoding is essential for the operation of the machine but it does dichotomize data at a micro level. Boisot points at a similar problem in public administration where the complexity of real problems is broken down into dichotomies that are easier to handle bureaucratically, but also violate the actual complexity of those issues (Boisot, 2000, 2006; Boisot & Child, 1988).
  • 3 Although it is entirely relevant and should be acknowledged more often in literature on complexity and governance.


Anderson, C. 2008. “The end of theory: The data deluge makes the scientific method obsolete.” Wired. Retrieved June 23, from

Archer, M. S. 1998. Critical realism: Essential readings. London: Routledge.

Beierle, C., Kern-Isberner, G., Bibel, W., & Kruse, R. 2003. Methoden wissensbasierter Systeme: Grundlagen, Algorithmen, Anwendungen (2., überarb. u. erw. Aufl. 2003). Wiesbaden: Vieweg+Teubner Verlag.

Bhaskar, R. 2008. A realist theory of science. New York: Routledge.

Boisot, M. 2000. “Is there a complexity beyond the reach of strategy?” Emergence, 2(1), 114—134.

Boisot, M. 1998. Knowledge assets. Oxford: Oxford University Press.

Boisot, M. 2006. “Moving to the edge of chaos: Bureaucracy, IT and the challenge of complexity.” Journal of Information Technology, 27(4), 239—248.

Boisot, M., & Child, J. 1988. “The iron law of fiefs: Bureaucratic failure and the problem of governance in the Chinese Economic Reforms.” Administrative Science Quarterly, 33(4), 507. https://doi.org/10.2307/2392641.

boyd, danah, & Crawford, K. 2012. “Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon.” Information, Communication & Society, 15(5), 662—679.


Burkholder, L. 1992. Philosophy and the computer (1 edition). Boulder, CO: Westview Press.

Burrell, J. 2016. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society, 3(1), 205395171562251.

Clancey, W. J. 1983. “The epistemology of a rule-based expert system —A framework for explanation.” Artificial Intelligence, 20(3), 215—251.

Coglianese, C., & Lehr, D. 2017. “Regulating by robot: Administrative decision making in the machine-learning era.” The Georgetown Law Journal, 105, 78.

de Bruijn, E., & Gerrits, L. 2018. “Epistemic communities in urban self-organization: A systematic review and assessment.” Journal of Planning Literature, 088541221879408.

Dixon, S. R., & Wickens, C. D. 2006. “Automation reliability in unmanned aerial vehicle control: A reliance-compliance model of automation dependence in high workload.” Human Factors: The Journal of the Human Factors and Ergonomics Society, 48(3), 474—486.

Egeberg, M., & Trondal, J. 2017. “Researching European Union Agencies: What have we learnt (and where do we go from here)?” JCMS: Journal of Common Market Studies, 55(4), 675—690. jcms.12525.

Elder-Vass, D. 2004. “Re-examining Bhaskar’s three ontological domains: The lessons from emergence.” In C. Lawson, J. Latsis, & N. Martins (Eds.), Contributions to social ontology (pp. 15—160). New York: Routledge.

Elder-Vass, D. 2005. “Emergence and the realist account of cause.” Journal of Critical Realism, 4(2), 315—338.

Endsley, M. R. 1995. “Toward a theory of situation awareness in dynamic systems.” Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(1), 32–64.

Endsley, M. R. 1996. “Automation and situation awareness.” In Human factors in transportation. Automation and human performance: Theory and applications (pp. 163–181). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Gerrits, L. 2012. Punching clouds: An introduction to the complexity of public decision-making. Litchfield, AZ: Emergent Publications.

Gerrits, L. 2021. “Soul of a new machine: Self-learning algorithms in public administration.” Information Polity.

Gerrits, L., & Moody, R. 2011. “Envisaging futures: An analysis of the use of computational models in complex public decision making processes.” In K. Richardson, & A. Tait (Eds.), Applications in complexity science: Tools for managing complex socio-technical systems. Litchfield, AZ: Emergent Publications.

Gerrits, L., & Verweij, S. 2013. “Critical realism as a metaframework for understanding the relationships between complexity and qualitative comparative analysis.” Journal of Critical Realism, 12(2), 166–182.

Gerrits, L., & Verweij, S. 2018. The evaluation of complex infrastructure projects. Cheltenham: Edward Elgar Publishing.

Gilardi, F. 2002. “Policy credibility and delegation to independent regulatory agencies: A comparative empirical analysis.” Journal of European Public Policy, 9(6), 873–893.

Goertz, G. 2005. Social science concepts. Princeton, NJ: Princeton University Press.

Jopke, N., & Gerrits, L. 2019. “Constructing cases and conditions in QCA – lessons from grounded theory.” International Journal of Social Research Methodology, 1–12.

Keast, R., Koliba, C. J., & Voets, J. 2019. Expanded research pathways, emerging methodological opportunities and responsibilities. Presented at the International Research Society for Public Management conference, Wellington.

Klijn, E. H. 1996. Regels en sturing in netwerken: De invloed van netwerkregels op de herstructurering van naoorlogse wijken [Rules and steering in networks: The influence of network rules on the restructuring of post-war neighbourhoods]. Delft: Eburon.

Klijn, E. H., & Koppenjan, J. 2014. “Complexity in governance network theory.” Complexity, Governance & Networks, 1(1), 61–70.

Klijn, E. H., & Koppenjan, J. 2015. Governance networks in the public sector. London: Routledge.

Klijn, E. H., & Snellen, I. 2009. “Complexity theory and public administration.” In G.R. Teisman, A. Van Buuren, & L. Gerrits (Eds.), Managing complex governance systems: Dynamics, self-organization and coevolution in public investments (pp. 17—36). New York: Routledge.

Koliba, C., Meek, J. W., & Zia, A. 2010. Governance networks in public administration and public policy. Boca Raton, FL: CRC Press, Inc.

Latour, B. 1991. “Technology is society made durable.” In J. Law (Ed.), A sociology of monsters: Essays on power, technology, and domination (p. 273). New York: Routledge.

Latour, B. 2005. Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.

Latour, B. 2009. “Tarde’s idea of quantification.” In M. Candea (Ed.), The social after Gabriel Tarde: Debates and assessments (pp. 145–162). London: Routledge.

Latour, B., Jensen, P., Venturini, T., Grauwin, S., & Boullier, D. 2012. “‘The whole is always smaller than its parts’ – a digital test of Gabriel Tarde’s monads.” The British Journal of Sociology, 63(4), 590–615.

Mackenzie, A. 2015. “The production of prediction: What does machine learning want?” European Journal of Cultural Studies, 18(4–5), 429–445.

Mackenzie, A. 2017. Machine learners. Cambridge, MA: The MIT Press.

Majone, G. 1997. “The agency model: The growth of regulation and regulatory institutions in the European Union.” EIPASCOPE, 1997, 1–6.

Manovich, L. 2012. “Trending: The promises and the challenges of big social data.” In M. K. Gold (Ed.), Debates in the digital humanities (pp. 460–475). Minneapolis: University of Minnesota Press.

Mast, V., Falomir, Z., & Wolter, D. 2016. “Probabilistic reference and grounding with PRAGR for dialogues with robots.” Journal of Experimental & Theoretical Artificial Intelligence, 28(6), 889–911. https://doi.org/10.1080/0952813X.2016.1154611.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. 2016. “The ethics of algorithms: Mapping the debate.” Big Data & Society, 3(2), 205395171667967.

Morcol, G. 2002. A new mind for policy analysis: Toward a post-Newtonian and postpositivist epistemology and methodology. Westport, CT: Praeger Publishers.

Morcol, G. 2012. A complexity theory for public policy. New York: Routledge.

Norman, D. A. 1989. Inappropriate feedback and interaction, not “overautomation” (Report No. 8904; p. 14). San Diego: Institute for Cognitive Science, University of California.

Pantic, M., Pentland, A., Nijholt, A., & Huang, T. 2006. Machine understanding of human behavior. Proceedings of the ACM International Conference on Multimodal Interfaces, 12.

Pawson, R. 2006. Evidence-based policy: A realist perspective. London: Sage.

Peterson, S. A. 1985. “Neurophysiology, cognition, and political thinking.” Political Psychology, 6(3), 495–518.

Puppe, F., Gappa, U., Poeck, K., & Bamberger, S. 2013. Wissensbasierte Diagnose- und Informationssysteme: Mit Anwendungen des Expertensystem-Shell-Baukastens [Knowledge-based diagnosis and information systems: With applications of the expert system shell kit]. Berlin: Springer-Verlag.

Salcedo-Sanz, S., Del Ser, J., Landa-Torres, I., Gil-Lopez, S., & Portilla-Figueras, J. A. 2014. “The coral reefs optimization algorithm: A novel metaheuristic for efficiently solving optimization problems.” The Scientific World Journal.

Schipper, D., & Gerrits, L. 2018. “Differences and similarities in European railway disruption management practices.” Journal of Rail Transport Planning & Management, 8(1), 42–55. https://doi.org/10.1016/j.jrtpm.2017.12.003.

Sharkansky, I. 2002. Politics and policymaking: In search of simplicity. Boulder, CO: Lynne Rienner Publishers.

Toshkov, D. 2016. Research design in political science. London: Macmillan International Higher Education.

Venturini, T., Jacomy, M., Meunier, A., & Latour, B. 2017. “An unexpected journey: A few lessons from Sciences Po médialab’s experience.” Big Data & Society, 4(2), 205395171772094. https://doi.org/10.1177/2053951717720949.

Yeung, K. 2018. “Algorithmic regulation: A critical interrogation.” Regulation & Governance, 12(4), 505–523.
