Process and Outcome in Democracy Evaluations
In his book on democracy, Charles Tilly (2007) argued that “if we want insight into causes and effects of democratization or de-democratization, we have no choice but to recognize them as continuous processes rather than simple steps across a threshold in one direction or the other” (2007: 10). In this statement, Tilly identifies two approaches to the study of democracy. In the first, democracy is a unitary system that exists in a certain (measurable) degree of completeness. On this approach, democracies are “full” or “partial” according to identifiable thresholds that often imply a teleological element in the definition of democracy itself. Tilly’s alternative, on the other hand, treats the measurement of threshold variables as an insufficient means of representing the complex social processes of democratization and de-democratization. He argues that the mechanisms and processes of democracy have infinite (or at least very long-term) time horizons, and that these processes have not historically been consistent or predictable. Thus, for him, any attempt to construe democracy as a series of finite “outcomes” that develop in a linear way is not built on historical evidence and will likely lead to gross oversimplification.
The processual approach to democracy has come to represent something of a theoretical consensus in the social sciences,14 but for professionals working in democracy assistance organizations, such an approach has challenged the importance of what originally was the most tangible form of expertise associated with these organizations: election support and monitoring. While democracy promotion activities in the 1980s largely consisted of information-based work supporting and monitoring democratic elections, the field-wide focus on longer-term processes during the shift to assistance-based activities in the 1990s was a difficult transition for some organizations. As Carothers (1999) pointed out, “it [was] hard for organizations involved in election observing to avoid the allure of high-profile short-term observer missions as opposed to the slow, often unexciting work of covering an electoral process from start to finish” (1999: 131; see also Carothers 2004). Essentially, building institutional capacity and focusing on long-term processes of democratization required changes to the way organizations constructed their own expertise.
As the discourse of governance and the discursive posture of neoliberal institutionalism became more prevalent during the 2000s, the evaluation mechanisms that organizations used to represent the results or outcomes of their projects became more complex, and the subject of evaluation came to dominate debates within the field. For example, practitioners increasingly were compelled to balance different types of performance indicators based on both “extrinsic” and “intrinsic” forms of evaluation (Burnell 2007: 22).15 Here, the extrinsic value of a project could be evaluated on the basis of a comparative measurement of global democratic performance, while the intrinsic value might be gauged according to the expectations and goals of participants in the project. For practitioners, incorporating mechanisms for both extrinsic and intrinsic evaluation during the development or planning stage of a project often led to difficulties when the extrinsic goals favored by donors were incommensurable with the intrinsic goals desired by recipient parties working “on the ground” (Burnell 2007).
Part of the difficulty presented by these two forms of evaluation stems from the fact that intrinsic goals are necessarily context dependent and therefore somewhat better at identifying important intermediate steps in long-term democratization processes. Extrinsic evaluations, on the other hand, generally refer to standards that experts establish in the broader field of development to provide comparative measurements of democracy in a global context. Such evaluations are much more likely to favor outcomes, benchmarks, or other “threshold” measures that imply a teleological approach to democracy. In fact, the United Nations Development Programme has published two editions (in 2003 and 2007) of Governance Indicators: A User’s Guide meant to introduce and summarize these various indices for practitioners. While the guide covers most possible extrinsic measurements that might influence democracy assistance program evaluation, the key insight of this guide is that there is no universal approach or consensus on how to measure concepts such as democracy or governance.
This ambiguity speaks to a central problem that has motivated debates about evaluation within the field. On the one hand, following the logic of neoliberal institutionalism—which assumes institutions in recipient countries are “broken” and potentially corrupt—donors often are inclined to demand accountability and transparency regarding the use of funds. The largest donor organizations fund projects in many different regions of the world and therefore favor evaluation mechanisms that allow them to regularly track the outcomes of each project in a way that facilitates comparison between projects—that is, tracking which are most “successful”—and speaks to their specific “organizational objectives” (Burnell 2007). On the other hand, this same logic dictates that social problems such as inequality or disenfranchisement are merely symptoms of faulty institutions and that solving these problems requires long-term processes aimed at building up the capacity of governing institutions. It is no wonder, then, that the problem of evaluation is so difficult for actors in the field of democracy assistance, who must produce universally comparable results for projects that are inherently situational, contingent, and processual.
Government agencies and other donor organizations have attempted to deal with this problem in the same way that many other fields have worked to improve evaluation mechanisms: by “improving” the quality and quantity of the data gathered through evaluations.16 In 2008, USAID commissioned a National Research Council (NRC) study designed to assess and improve existing methods for evaluating the effectiveness and impact of democracy assistance work. Among the goals outlined, one of the most important for the industry was developing “an operational definition of democracy and governance that disaggregates the concept into clearly defined and measurable components” (National Research Council 2008: 29). The methodologies of evaluation and knowledge production recommended by the NRC used language that was an interesting mix of scientific empiricism—calling for “hard empirical evidence” derived from “rigorous impact evaluation methods” based on scientific practices such as “randomized trials” and comparison with “control groups”—and management ideals borrowed from the private sector, such as “strong leadership” and “strategic visions” (2008: 219-220). The key concept that unites these ideals is the call for better “results” that carry both the legitimacy of science and the practical usefulness of common measurements. In addition to further emphasizing the importance of extrinsic measurements for donor organizations such as USAID, the formula offered by these evaluation mechanisms represents a broader confluence of scientific norms and managerial logics that is increasingly common in other large public organizations.
The historical and cultural shift toward neoliberal institutionalism in the field of development has followed a concurrent shift in the organizational practices of public institutions, a shift that critical scholars have labeled “new public management”. In his provocatively titled book In Praise of Bureaucracy, Paul du Gay (2000) argues that management “reformers” in both the UK and the USA exploited the near-universal frustration with bureaucracy’s characteristics (waste, inertia, excessive red tape, etc.) to promote business-inspired principles of economic efficiency as the highest values within government institutions. This support for “new public management” followed a management revolution that swept the corporate world in the 1970s and 1980s (Osborne 1992), one that aimed to make companies more “flexible” and less rigidly fixed in what were perceived to be inefficient ways of doing business. Luc Boltanski and Eve Chiapello (2005) went so far as to characterize this revolution as a “new spirit of capitalism”, which embraced flexibility and autonomy as foundational normative principles of contemporary capitalism. In the field of international development, organizations funded directly or indirectly by public agencies have similarly embraced the ideals of new public management.
As a result, organizations such as USAID have placed much more emphasis on ideals of economic efficiency and reduced waste.