WHY THINK ABOUT COMPLEX SYSTEMS IN ONE HEALTH?

Our current methods and tools are not well equipped to integrate information that spans environmental, animal, social, and health concerns, making it hard to “see” in an integrative way, or to benefit from the full range of knowledge and experience relevant for creating healthier circumstances. Many of the models, perspectives, and methods used to study human-animal-environmental health systems were created to answer questions about the health of individuals or groups of individuals. Berezowski et al. discuss this challenge from a surveillance perspective in Chapter 7. In the literature, it is more common to find metaphors and analogies adapted from the natural or social sciences that justify the use of complexity and chaos thinking in health sciences than it is to find evaluated cases demonstrating the value of this thinking in One Health.

The hope for complex systems thinking in health is that by considering the changing context, its key actors, and their interactions over time, we can more effectively understand and improve health (Rusoja et al., 2018). Better understanding of how multiple human and animal components interact in a nonlinear fashion to produce highly context-dependent outcomes could, for example, help identify situations predisposed to emerging diseases or find synergies to produce more efficient use of health resources for shared human, animal, and environmental health benefits. Understanding how the multiple determinants of a problem within a human-animal-environment system relate to each other and to the history of that system could help identify new ways to deploy health intelligence resources to find vulnerable situations in advance of harm (see Chapter 7 for details). Yet promises such as these remain largely untested and unfulfilled.

Complex systems thinking is compatible with several One Health aspirations. The focus on understanding boundaries of systems, exploring interrelationships, and seeing a system from different people’s unique viewpoints could help define the scope of a One Health problem that a group wishes to tackle, identify common sets of indicators of success, and open group members’ eyes to opportunities to influence systems’ outcomes in ways they may not have seen from their own vantage point. Recognizing that a complex system cannot be fully understood from only one perspective can reveal the need for multiple collaborators to learn from each other to develop acceptable and effective One Health actions.

While the authors firmly believe that thinking about One Health from a systems perspective is an excellent teaching metaphor and a framework to plan One Health research and actions, we cannot provide the reader with published evidence of its utility, feasibility, and acceptability. What follows are two scenarios that illustrate the promise of complex systems thinking for One Health.

Scenario 1 - Emerging Diseases and Epidemics

The fact that the occasions when the conditions conducive to epidemics exist without an epidemic occurring far outnumber the few occasions when one occurs (Stephen et al., 2004) supports the conclusion that epidemics are emergent phenomena of complex adaptive systems. The long and sometimes convoluted causal chains between upstream and downstream animal, human, and environmental determinants of disease emergence, and our poor success rate at forecasting an emerging disease with sufficient precision to inspire local action, call out for a new way to think about how we prepare for emerging diseases.

Ideally, one would like to solve a mathematical model that describes a system to the extent that we are able to exactly predict the state of the system in the future given its current state (i.e. predict precisely the combination of system attributes that lead to an emerging disease). The idea of studying or modelling emergence as a complex system phenomenon can seem an elusive if not impossible task. However, we need not always capture all underlying behaviours of a complex system to understand its relevant characteristics. Complexity approaches may not provide the level of precision needed to provoke local biomedical interventions, but they have been used to “diagnose” when a system will change from one state to another. Instead of asking “can we predict which pathogen will emerge on which day, in which locations,” complex systems perspectives may help us ask “what are the circumstances that tip a system from being unconducive to conducive to an emerging disease, epidemic or pandemic?”

Epidemics as Critical Transitions

Epidemics occur in complex systems involving the interactions between reservoirs and hosts, exposure pathways and transmission rates, and environmental and social factors that are inherently interrelated and unpredictable (Wilcox and Colwell, 2005). Yet we can use a single number (R) to describe a situation when a disease can spread in a population. R is the effective reproduction number, which is the average number of secondary infections resulting from a single infected case. R is affected not only by the nature of the pathogen but also by interacting host and environmental factors. The point R = 1 can be viewed as a transition point between two disease states: one where a population can sustain an epidemic (R > 1) and one where the epidemic is likely to die out (R < 1). In the study of dynamic systems, transitions between two regimes such as this are called critical transitions. Therefore, disease (re)emergence, epidemics, and elimination can be conceived as critical transitions (Drake et al., 2019).
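The threshold behaviour at R = 1 can be illustrated with a minimal deterministic SIR model (a sketch for intuition only; the function name, population size, and rates are illustrative assumptions, not from the source):

```python
def sir_peak(r0, n=10_000, i0=10, gamma=0.2, steps=500):
    """Deterministic discrete-time SIR model.

    The transmission rate beta is chosen so that the basic
    reproduction number is r0 = beta / gamma.  Returns the peak
    number of simultaneously infected individuals.
    """
    beta = r0 * gamma
    s, i = float(n - i0), float(i0)
    peak = i
    for _ in range(steps):
        new_inf = beta * i * s / n  # new infections this step
        new_rec = gamma * i         # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

assert sir_peak(0.8) < 20      # R < 1: the outbreak fizzles out
assert sir_peak(2.0) > 1_000   # R > 1: a large epidemic emerges
```

Near the threshold, small changes in transmission or recovery rates tip the system from one regime to the other, which is why R = 1 behaves as a critical transition point rather than a gradual dial.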

Critical transitions are a fundamental aspect of nonlinear systems, described by the mathematical notion of bifurcation, in which more than one steady state exists for a system. If system parameters are tuned in a certain way, it is said to be at a tipping point, wherein small perturbations to a system’s state or its parameters can trigger a cycle of positive feedback, causing the system to dramatically shift from one steady state to another. Critical transitions play an important role in all complex systems, including ecosystems, economics, and human biology (Scheffer, 2009).

Transition Early Warning Systems

How can we apply the immense body of understanding of critical transitions to the scenario of disease emergence and epidemics? It would be of great benefit to epidemic and pandemic prevention programmes to be able to characterize and diagnose systems that are vulnerable to critical transitions. A systems approach to early warning would aim to detect the characteristic behaviour of systems near critical transitions using spatiotemporally ordered data (Boettiger et al., 2013; Scheffer et al., 2012). One such behaviour is known as critical slowing down, which describes the fact that near a tipping point, a system will recover more slowly from perturbations. This can be observed via the increase of autocorrelation and variation in time-series data of the relevant variable, such as the number of infected in the case of disease. A key advantage of early warning systems is their genericity: since they are based on general features of nonlinear dynamical systems, they do not require precise knowledge of the feedback loops that may trigger the transition. Even without a full understanding of the nonlinearities present in a system, there are diagnostics which can be used as early warning signals for such critical transitions (Scheffer et al., 2012). However, different early warning systems may be better suited to different types of transitions, so their proper use can still be informed by such knowledge (Brett et al., 2017; Boettiger et al., 2013; Dakos et al., 2015).
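The two signals mentioned above, rising autocorrelation and rising variance, can be sketched with a simple AR(1) process standing in for a monitored variable (an illustrative assumption, not a model from the source): as the autoregressive coefficient approaches 1, recovery from perturbations slows, and both indicators increase.

```python
import random
import statistics

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a time series."""
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

def ar1_series(phi, n=2000, noise=1.0, seed=0):
    """AR(1) process x_t = phi * x_{t-1} + epsilon_t.

    As phi -> 1 the process recovers ever more slowly from shocks,
    mimicking critical slowing down near a tipping point.
    """
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, noise)
        out.append(x)
    return out

far = ar1_series(phi=0.3)    # system far from the transition
near = ar1_series(phi=0.95)  # system close to the transition

# both early warning indicators rise as the transition approaches
assert lag1_autocorr(near) > lag1_autocorr(far)
assert statistics.pvariance(near) > statistics.pvariance(far)
```

In practice these indicators are computed over a rolling window of surveillance data, and a sustained upward trend in both, rather than their absolute values, is taken as the warning signal.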

Transition early warning systems have been successfully applied in many fields, such as in ecology (Scheffer et al., 2015), where the critical transition may involve species extinction or population regime shifts (including large-scale experimental verification in lake ecosystems; see Carpenter et al., 2011 for example); human health (Rikkert et al., 2016), where transitions coincide with sudden events (Maturana et al., 2020), such as seizures or heart attacks, and long-term changes such as the onset of depression (van de Leemput et al., 2014); and climate science, where, for example, ancient abrupt climate changes were shown to be preceded by critical slowing down (Dakos et al., 2008). While the theory and application of transition early warning systems are well established in these fields, their presence and use in the context of emerging diseases and epidemics are in their infancy. One important reason for this is the fundamental difference between the nature of disease emergence, in which the number of infected individuals grows continuously albeit explosively, and more commonly studied critical transitions, where the transition between two states is discontinuous, or “catastrophic” (Drake et al., 2019). This difference means that different early warning systems are required; their theoretical development is just now occurring (Drake et al., 2019; Boettiger et al., 2013; Dakos et al., 2015). Very recently, these ideas were applied to real-world data by Harris et al. (2020), who studied the re-emergence of malaria in Kenya, triggered by the slow development of parasite resistance to treatment, and showed that it could be detected by early warning systems several years prior to the transition. Crucial to the successful application of transition early warning systems are consistent high-quality epidemiological data, which are needed to reduce statistical uncertainty to a point where early warning systems are reliable enough to motivate proactive behaviour (NRC 2001).

Adopting a transition early warning system approach would allow us to complement the typical “surveillance and response” approach to disease emergence, where emerging diseases are detected early and rapid intervention is used to prevent their spread, with the approach of “prediction and prevention,” where a system is monitored and characterized to determine susceptibility to outbreaks, and preventative action is taken before emergence (NRC 2001). While yet to be realized in practice, recent work is showing the theoretical possibility of analytically identifying critical transitions associated with disease elimination and emergence (e.g. O’Regan and Drake, 2013; Brett et al., 2017). Various works have moved these findings closer to implementation by showing that some approaches to early warning based on signals of critical transition were robust to imperfect epidemiological data (Brett et al., 2018), to increasing model complexity and dimensionality (Brett et al., 2020), and to the inclusion of social factors (Phillips et al., 2020).

Scenario 2 - Engineering Resilience and Fostering Change

One Health practitioners are often asked to identify actions to modify human, animal, and environmental relationships to foster resilience or promote a change that reduces vulnerability even when those relationships are dynamic, complex, and not fully accounted for. Complex systems thinking can provide new tools to rise to these tasks.

Promoting Resilience

Two main characteristics that determine a network’s resilience are diversity and connectivity (Brett et al., 2017; O’Regan and Drake, 2013). A network which is diverse and modular (connected primarily into smaller sub-networks) is resilient to critical transitions, as external perturbations can be compensated for by the greater variety of negative feedback loops provided by the diverse composition, and are less likely to propagate throughout the entire system due to the modularity. Such a network can gradually adapt in the face of external changes. A network which is highly connected and homogeneous, on the other hand, might initially have a rigidity that resists external perturbations. However, at a critical stress level, the components of the network may undergo a sudden collective change that propagates throughout the whole network, resulting in a critical transition.

Increasing diversity and modularity can help protect a complex system against critical transitions. This observation has a long history in ecology and has also been explored in economics (Scheffer et al., 2012; Haldane and May, 2011). Modularity is an important factor in preventing the spread of disease. The temporal dynamics of modules (such as communities) have been associated with epidemic processes (Nadini et al., 2018). Sah et al. (2018), for example, concluded that “high fragmentation and high subgroup cohesion, which are both associated with high modularity in social networks, induce structural delay and trapping of infections that spread through [animal social] networks, reducing disease burden.” The effect of biodiversity is less clear. Morand and Lajaunie (2018) concluded that “empirical studies and often-correlative analyses show that biodiversity is a source of pathogens, but increases in epidemics and risks of emergence are associated with decreased biodiversity.” Luis et al. (2018) found that biodiversity can dilute, amplify, or have no effect on zoonotic disease transmission and risk. Local factors, such as changes in habitat connectivity and edges, or access to health protection resources, can modify these relationships. General theories of the relationship between biodiversity and the risk of epidemic or pandemic disease have yet to emerge.
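The degree of modularity invoked above can be quantified with Newman’s modularity score Q, which compares the fraction of edges inside proposed communities with what a degree-preserving random placement would produce. A minimal sketch (the toy graph of two triangles joined by a single bridge edge is an illustrative assumption, not data from the source):

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph.

    adj maps each node to the set of its neighbours; communities is a
    list of node sets.  Q is the fraction of edges within communities
    minus the fraction expected under random, degree-preserving rewiring.
    """
    two_m = sum(len(nbrs) for nbrs in adj.values())  # 2 * number of edges
    q = 0.0
    for comm in communities:
        for u in comm:
            for v in comm:
                a_uv = 1.0 if v in adj[u] else 0.0
                q += a_uv - len(adj[u]) * len(adj[v]) / two_m
    return q / two_m

# two triangles joined by one bridge edge (2-3): a clearly modular graph
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
q_modular = modularity(adj, [{0, 1, 2}, {3, 4, 5}])
q_trivial = modularity(adj, [{0, 1, 2, 3, 4, 5}])
assert q_modular > 0.3        # the two-community split captures real structure
assert abs(q_trivial) < 1e-9  # a single community carries no information
```

In an epidemiological network, a high Q for the natural community structure suggests that infections spreading along edges will tend to be delayed or trapped within sub-groups, consistent with the Sah et al. (2018) finding quoted above.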

Diversity has, however, been noted as a hallmark of resilient systems. For example, increased biodiversity on farms has been associated with increased resilience to extreme weather events associated with climate change (Altieri et al., 2015). Another study found that marine protected areas were better able to meet their conservation goals where there was a diversity of interconnected incentives that arose from diverse institutional governance systems (Jones et al., 2013). The functional redundancies that come with species biodiversity have been investigated as a contributor to coral reef ecosystem resilience (Micheli et al., 2014).

Network theory is being increasingly used to investigate how social diversity, connectivity, and modularity influence disease spread and control in human and animal networks (Pastor-Satorras et al., 2015). In health, it has been used to study disease transmission, information transmission, the influence of personal and social networks on health behaviour, and the interorganizational structure of health systems. Wu et al. (2016), for example, examined how social learning networks influenced shrimp farmer disease control behaviour in Sri Lanka. Wittrock et al. (2019) used network analysis to visualize how experts conceived the interrelationships of multiple determinants of fish and wildlife health. Examination of social networks and contacts has been used to study the spread of disease in people, wildlife, and livestock (e.g. Noremark et al., 2011; Hamede et al., 2009; Latkin et al., 2013).

Fostering Change

What are we trying to do when we launch a One Health intervention? From a simple systems point of view, we are asking person A to apply intervention B in situation C. For example, there is an infectious disease outbreak that is vaccine preventable, so we ask farmer A to use vaccine B in his susceptible animals on farm C. From a complexity perspective, what we are trying to do is initiate a time-limited series of events that interact with the social context of the system to change the trajectory of a socio-ecological system to a state that is not conducive to a specific rate of occurrence of an infectious disease. Implementing change is not straightforward in an unpredictable system.

Critical to scaling up or sustaining an intervention is understanding how the social and environmental characteristics and circumstances surrounding the implementation interact with, influence, modify, facilitate, or constrain the intervention and its implementation (May et al., 2016). The Context and Implementation of Complex Interventions framework is an example of a tool that relates the systems attributes of an intervention to the space and context in which it takes place, in order to better understand whether and how interventions work (Pfadenhauer et al., 2017). Complexity thinking helped Braithwaite et al. (2018) identify that while change can be stimulated in many ways, a triggering mechanism is needed, such as legislation or widespread stakeholder agreement; that feedback loops are crucial to continue change momentum; that extended sweeps of time are involved, typically much longer than believed at the outset; and that taking a systems-informed, complexity approach, having regard for existing networks and socio-technical characteristics, is beneficial.

As another example, Leykum et al. (2007) showed how complex adaptive systems thinking helped plan more effective organizational interventions for type II diabetes management. Specifically, they found that the ability of patients to modify practices based on forces internal and external to the clinical setting (co-evolution), and attention to the interconnections affecting client communications, had the strongest relationship with intervention effect.

Implementing change in complex systems may not always have the intended effect. Sterman (2006) used the term “policy resistance” to refer to the “tendency for interventions to be defeated by the system’s response to the intervention itself.” For example, consider hospital waiting times, which have historically been resistant to efforts to reduce them. One mechanism for this was given by Smethurst and Williams (2002), who showed that reduced waiting times may lead to an increase in referrals, such that the waiting times are unaffected in the end. In certain cases, the system’s response to an intervention can cause unintended negative effects. Efforts to eliminate all forest fires, for example, can cause the build-up of undergrowth and old or dead trees, thereby greatly increasing the risk of large-scale fires (Malamud et al., 1998).

We can try to diagnose this kind of resistance to change by searching for patterns in observable data that are characteristic of complex systems. In the case of hospital waiting times, if one observes the quarter-to-quarter variations in waiting times, one finds that the frequency of occurrence N(x) of a variation of size x scales as a power law, N(x) ~ x^(-a) (Papadopoulos et al., 2001; Smethurst and Williams, 2001). This means that, as is the case for earthquakes and forest fires (Malamud et al., 1998), the distribution is “fat-tailed” in that variations of significant magnitude are more likely to occur than would be the case in a standard bell curve having a characteristic scale, like human height. This lack of a characteristic scale, or “scale invariance,” is often (but not solely) associated with systems in the vicinity of a critical transition, at “criticality” (Gisiger, 2001). This is a powerful, albeit controversial (Frigg, 2003), concept that may underlie many complex systems because systems at criticality show an optimal balance between robustness and adaptability (Munoz, 2018). Because of this, implementing changes in such systems can have a much larger, or smaller, effect than intended, and caution must be exercised. On the other hand, it suggests that large-scale change can be possible with relatively small intervention efforts (Fullilove et al., 1997). The practical consequence for the example of hospital waiting times is that occasional long times may be an inevitability, and points of evaluation and change should rather focus on, for example, quality of care (Smethurst and Williams, 2001).
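The scaling exponent a can be estimated from observed event sizes; a minimal sketch using the standard Hill (maximum-likelihood) estimator for a continuous power law, applied to synthetic Pareto-distributed data with a known exponent (the data and all parameter values are illustrative assumptions, not measurements from the cited studies):

```python
import math
import random

def fit_power_law_exponent(samples, xmin=1.0):
    """Hill (maximum-likelihood) estimate of alpha for a continuous
    power-law density p(x) ~ x^(-alpha) on x >= xmin."""
    xs = [x for x in samples if x >= xmin]
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)

# synthetic data: inverse-transform sampling from a Pareto law
# with known exponent alpha = 2.5 (tail exponent alpha - 1 = 1.5)
rng = random.Random(42)
alpha_true = 2.5
data = [(1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(50_000)]

alpha_hat = fit_power_law_exponent(data)
assert abs(alpha_hat - alpha_true) < 0.05  # estimator recovers the exponent
```

As noted below for epidemics, a fitted exponent like this is what lets one extrapolate the frequency of large, rare events from the observed frequency of small ones, and track shifts in the exponent as a possible sign of an approaching transition.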

Another example where scale invariance appears is epidemics in small, isolated populations of susceptible people, where outbreaks occur with dramatic variation in size and are separated by long periods of disease absence. Here, both the size and duration of epidemics can be fit to power laws as above. This behaviour has been found in isolated island populations (Rhodes et al., 1997), outbreaks of cholera (Roy et al., 2014) and dengue fever (Saba et al., 2014), and measles cases in populations with declining vaccine usage (Jansen et al., 2003). Conventional epidemiological models are unable to capture this scaling behaviour. In fact, these types of epidemics are better described by forest fire models, which are paradigmatic examples of systems at criticality (Rhodes et al., 1997).

The apparent robustness of these scaling laws in such epidemic scenarios suggests that short-term interventions like treatment and vaccination programmes may not be effective in controlling and preventing the epidemics due to their self-regulating nature. Rather, one should focus on eliminating the conditions that enable the persistence of the disease. The existence of the power-law scaling gives some limited predictive power: it is possible to infer the frequency of large outbreaks from that of small outbreaks (Rhodes et al., 1998), as is routinely done for earthquakes (Aki, 1981). It is also possible to estimate the distance to the critical transition by fitting the scaling exponent a in the power law (Jansen et al., 2003; Roy et al., 2014).

 