Finally, Chapters 2-12 examined what effects, both intended and actual, policy formulation tools produce when they are employed. The policy instruments literature has struggled to answer this question, at least for implementation instruments, ever since Salamon (2002, p. 2) noted that each instrument imparts its own unique spin or 'twist' on policy. Not surprisingly, the less mature sub-field of policy formulation tools has much work to do in relation to 'effects'. Indeed, one of the striking findings from the tool-focused literatures summarized in Chapters 2-7 is how few of them have even identified effects as a priority research topic. Some literatures (those around CBA and computer models, for example) have made more progress than others, but in general the level of critical engagement has been low. More often than not, certain effects have simply been presumed to flow from the selection of particular tools (for example, that using CBA results in the identification of the Pareto-optimal policy solution).

As noted in Chapter 1, this collective failure probably has much to do with the disciplinary background of the contributors, but it also reflects an entirely understandable desire to stay anchored in the relatively clear-cut world of textbooks and typologies. Nonetheless, the chapters do suggest some potentially useful categorizations that could form the basis of future work. For example (and drawing on Turnpenny et al. (2009, p. 648)), a broad distinction can be drawn between 'substantive' effects (the extent to which tools generate change - or work to ensure continuity - in a given policy field) and 'process-based' effects (in other words, system-wide effects which arise from the use of particular tools). A wide array of substantive effects is flagged up in the chapters, ranging from learning around new means to achieve policy goals (predominant amongst tools such as CBA, but also computer modelling) to heuristic-conceptual effects on problem understandings (see for example Chapters 2 and 5). Large-scale, system-wide energy models may play an important role in facilitating adjustments to new 'policy images', through the development of new policy paradigms and policy objectives (Chapter 12). More fundamentally, some tools (for example, participatory backcasting) have been developed with the avowed aim of facilitating 'out of the box' thinking that restructures actor preferences in a profound way. Meanwhile, the process-based effects are potentially also very wide ranging. For example, Chapter 11 argues that indicators help to channel political attention - especially among overloaded oversight bodies - such that a 'broader critique' of the policy status quo becomes less and less likely. In addition, some participatory tools, such as the devil's advocate technique and participatory backcasting, aim to generate new understandings and uncover extant political power relationships.

A second important distinction relates to the difference between intended and unintended effects. We have already noted the difference between the 'imagined' effects that the advocates of tools aspire to provide (to use the terminology employed by Atkinson in Chapter 7) and their 'actual use'. In some of the chapters, the unintended effects are presented positively (as new problem framings - see Chapters 2 and 4 for example), whereas in others they are presented much more negatively (for example, 'gaming the system', 'closing down' debate, and nurturing 'reductionist' thinking are all noted in Chapter 4). To a large extent, the difference is one of prior expectations, purposes and, ultimately, values. Thus, by their very nature, the more procedurally inflexible tools such as CBA appear more prone to performance deficits. But more open, participatory tools can also produce unexpected effects; for example, Chapter 2 recounts how backcasting approaches all too easily entrench political differences and existing forms of participation. Consequently, the new sub-field of policy formulation research should be careful to pose more probing questions (for example, unexpected by whom and why?) rather than assume that everything which is unexpected is necessarily bad (or the opposite!). Finally, some effects may be extremely difficult to categorize. For example, Chapter 11 tells the story of how, paradoxically, in the case of indicators, 'a set of tools designed to shift the political focus onto outcomes was deployed in a way that resulted in a preoccupation with process'.

To conclude, understanding effects arguably constitutes the biggest analytical challenge of all, but one which the nascent sub-field of policy formulation is beginning to engage with. Chapters 2-12 already suggest that it will require very careful and patient diachronic forms of analysis (cf. Owens et al. 2004), sensitive to the multiple rationalities that motivate actors to use particular tools in the first place. At present, there remains a definite 'pro-use' bias in the tools literatures (indeed, Chapters 5 and 6 explicitly focus on known examples of use). The authors of Chapter 2 argue that political elites may be reluctant to explore the potential of more open participatory tools and methods, which typically aim at opening up current problem framings and thus imperil their control. Yet experts in policy formulation tools may also be unwittingly sustaining this blind spot, especially if (as seems to be the case for participatory tools and, to a lesser extent, for indicators) they cannot agree on what their purpose should be - hence the prevalence of very open evaluation criteria that are extremely difficult to apply.
