The (Lack of) Intended Use of Indicators - and Ways of Enhancing Use

The degree to which indicators are used for their intended purposes varies greatly across indicator types and policy areas. In particular, there seems to be a rather strong dichotomy between the wide use of established economic indicators (for example, GDP, unemployment rate, levels of government debt and budget deficits) and performance management indicators on the one hand, and the far more infrequent use of the various sectoral, cross-sectoral and sustainability indicators on the other. Government performance measures are certainly used for their intended control purposes. Such use can take place in internal venues, drawing on official sources of knowledge, when central government departments and agencies use performance indicators to allocate resources, to plan and budget, to trigger corrective action, and to compare, identify and encourage 'best practice', or when public service managers use them to motivate employees to improve performance. The use of indicators by auditors and regulators to evaluate whether public sector organizations provide value for money can, in turn, be characterized as use in external venues, drawing on a variety of official and unofficial knowledge sources. The degree to which the various sectoral indicators (including, for example, indicators for monitoring national sustainable development strategies) are used varies widely. Perhaps most frequent is the use of these indicators in mandatory reporting exercises by government departments, which may take place in either internal venues (for example, annual sectoral reporting, or public sector performance measurement) or external venues (for example, obligatory EU policy assessments, OECD country reviews). These exercises draw mainly on official sources of knowledge, including knowledge produced by international organizations.

The 'alternative' composite indicators of progress, wellbeing and sustainable development, in turn, are actively used in particular by their producers and policy advocates to promote their preferred world-views - in other words, in venues external to the government, drawing on unofficial data sources. The uptake of such indicators by national and EU-level administrations in their daily work and decision making is far less widespread, probably largely because of the 'unofficial' status of the data underpinning the indicators. Some composite environmental and sustainability indicators, in particular the ecological footprint, have found a certain echo in the media and, to a limited extent, in public debate (for example, Morse 2011). The recent and ongoing efforts by various governments - including collaboration with national statistics offices - to develop 'official' alternative indicators of progress and wellbeing (for example, Seaford 2013; Sébastien et al. 2014) mark a shift in this indicator work towards the internal-official quadrant of the scheme in Chapter 1. However, the main expectation is that these indicators will operate in external venues, through public debate and subsequent uptake by policymakers (for example, Seaford 2013). The extent of actual use and influence of these indicators remains uncertain, not least because of frequent doubts about their scientific credibility and technical robustness (Sébastien et al. 2014). Finally, in many cases indicators are not used simply because the potential users are not even aware of their existence - a phenomenon that also applies to indicators within the internal-official quadrant (for example, Lehtonen 2013).

The hope that users would consider, for instance, sustainability or environmental indicator sets in their totality, reflecting upon the trade-offs between the various indicators, has proven largely illusory. Especially in external venues, indicators are used selectively, interpreted out of context, deployed as political ammunition rather than as a rational input to policy, or simply ignored. This is often a combined result of attributes relating to the indicators themselves, the actor 'repertoires' - 'stabilized ways of thinking and acting (on the individual level) or stabilized codes, operations and technology (on other levels)' (van der Meer 1999, p. 390) - and the broader policy context. Relevant factors may include excessively loose linking between reporting schemes and policymaking; potential users' lack of trust in the indicators (government actors may be institutionally prevented from using 'unofficial' data sources, while external actors may mistrust government data); lack of resources within the administration; or neglect of user concerns in the design of indicator systems.

Several preconditions have hence been identified for the instrumental use of indicators: relevance for the intended user (representative, simple and easy to interpret, reflecting ongoing changes in society, able to clearly communicate success or failure); scientific and technical quality (ideally based on international standards and norms, and on a clear conceptual framework); measurability; context-specificity and adaptability; linkage with regular monitoring and evaluation exercises; and clear identification of target groups and expected indicator functions (Pintér et al. 2005, p. 16; Hezri 2006, p. 172; Bell et al. 2011, p. 5; Seaford 2013). There should be an adequate, though not perfect, match between the 'repertoires' of the indicator users and the conceptual framework conveyed by the indicator; in other words, indicators should be salient, credible and legitimate in the eyes of their expected users (see Chapter 3, this volume; Cash et al. 2002). The relationships and determinants of salience, credibility and legitimacy are complex, and there are obvious trade-offs between the three criteria. For example, the frequent debates and disputes concerning the validity of rankings produced by international organizations illustrate the vagueness and fluidity of the distinction between 'official' and 'unofficial' sources of knowledge.

Temporal aspects are also vital in determining indicator use. Frequent complaints by potential users include the lack of timely, up-to-date indicator information (for example, Rosenström and Lyytimäki 2006) and the claim that backward-looking indicators are of little use in policy formulation - hence the greater appeal of forward-looking policy formulation tools such as cost-benefit analyses (see Chapter 7, this volume) and scenarios (see Chapter 3, this volume; Lehtonen 2013).

These perceived indicator qualities, in turn, are strongly shaped by the process of indicator production - in particular, the extent to which the actors participating in indicator processes are seen as legitimate and credible. Collaborative processes of indicator development may foster agreement on problem definitions, policy objectives and policy measures (Bell et al. 2011). In line with findings from evaluation research, the process of indicator production - through social learning, networking, problem framing, focus and motivation - is often as influential as, or even more influential than, the 'use' of the final, published indicator (for example, Mickwitz and Melanen 2009; Lehtonen 2013; see also Chapter 2, this volume).

Among the factors relating to the policy setting, those that shape indicator use include the existence (or absence) of an 'indicator culture', the weight of the policy area in question among policy priorities (for example, Sébastien et al. 2014), and the degree of agreement among key actors on problem definitions, policy objectives and policy measures (for example, Turnhout et al. 2007; Bell et al. 2011, p. 108). Use tends to be enhanced when the policy agenda in question has remained stable over time (Bell et al. 2011, p. 10), yet situations of crisis can open 'windows of opportunity' for enhanced indicator use, as the prevailing institutions and frameworks of thought are called into question (Hezri 2006, p. 172).

Table 4.3 Examples of indicators and their use in different policy venues

Table 4.3 presents selected examples of indicators and their intended and actual use, classified according to the distinctions between internal and external venues, and between official and unofficial sources of knowledge.

 