Policy Evaluation

Policy evaluations should not differ much from any other kind of evaluation, but the evaluation of policy measures and programs faces a number of specific challenges. Some of them have been discussed above, for example the time frame of the evaluations and the possible self-interest of policymakers. It must also be kept in mind that politics is usually a multi-actor game; evaluation should therefore include all actors, whether active or passive (HANBERGER 2001). The job of an evaluation is not only to uncover the effects and efficiency of policy measures, but also to inform. As PICCIOTTO (1999) states, “Public policy is informed both by economics - the queen of the social sciences - and by evaluation - the overarching meta-discipline.” He concludes that evaluations should serve to decrease information asymmetries between principals (i.e., in the case of policy evaluation: the people) and agents (i.e., the state).

The questions that politicians may ask are, for example, the following: “How will pursuing this outcome affect the others we are interested in? What effect does this output have on the desired outcomes? What would be the cost of the output in the future? How can we get more output for the same level of inputs? How should our agencies be organized?” (BUSHNELL 1998). These questions do not differ greatly from those asked by program managers or other institutions that look to evaluation to answer their questions about the effects and efficiency of programs and measures. In politics, though, the playing field is far larger: the intended and unintended effects usually affect a larger group, and the measures are of a far greater dimension in financial as well as personnel and institutional terms. At the same time, an evaluation is complicated by the large number of concurrent measures that may or may not have an impact on the subject to be evaluated, by the number of possible external shocks that may occur, and so on.

Another element specific to policy measures is that programs or policies are implemented on two levels: the political, administrative and financial implementation, often called the managerial level (BARBIER and SIMONIN 1997); and the ‘street level’ implementation. Evaluations must therefore include both levels. In addition, evaluations of policy measures are often a combination of all three types of evaluation: ex ante, ex post and accompanying the implementation. A general model of target-oriented policy evaluation is shown below (cf. Figure 50).

Figure 50: Target-oriented policy evaluation (SCHMID 1997)

The first step is the (1) identification of the target(s), which should be done on the basis of an analysis of market failures. Then (2) the policy measures are decided upon; their impact and the implementation process itself are influenced by the (3) socioeconomic and institutional conditions. These include the organizations involved in administering the programs, but also network structures (internal as well as external) and feedback loops concerning the outcomes. This is followed by an (4) impact analysis, i.e., the analysis of policy adoption by observing the immediately recognizable financial and physical flows, and a (5) cost-benefit analysis that must include opportunity costs[1]. SCHMID concludes the cycle with the final (6) policy choice and the assessment of whether the (1) target has been reached.
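Step (5), the cost-benefit analysis, can be illustrated with a small numerical sketch of how opportunity costs enter the calculation. The function name and all figures below are hypothetical illustrations, not taken from SCHMID's model:

```python
def net_benefit(benefits, direct_costs, best_alternative_benefit):
    """Net benefit of a chosen policy measure.

    The opportunity cost is the utility of the best alternative
    forgone (see footnote [1]); it is treated as an additional cost
    on top of the direct costs of the measure.
    """
    opportunity_cost = best_alternative_benefit
    return benefits - direct_costs - opportunity_cost

# Illustrative figures: a measure yields 120 units of benefit at 70
# units of direct cost, while the best forgone alternative would
# have yielded 30 units.
print(net_benefit(120, 70, 30))  # -> 20
```

A measure that looks profitable on direct costs alone (120 - 70 = 50) thus shrinks to a net benefit of 20 once the forgone alternative is counted, which is precisely why the model insists on including opportunity costs at this step.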

This cycle can be adapted for ex ante, accompanying and ex post analyses. In the case of ex ante evaluations, (2) is simply the set of possible measures whose impact is to be predicted; (3) are predicted conditions or, in the case of the institutional aspects, possibly known conditions; and the analyses (4) and (5) are estimates that form the basis for the final policy choice (6). For an accompanying evaluation, the policy measure must be chosen in the second step, but the cycle can be repeated as often as the budget and time frame allow: in theory, the chosen policy is adapted or, in the most extreme case, even replaced if the analyses show that the measure chosen was not appropriate or useful. In practice, this is a highly unlikely course. First, policy measures have been decided upon in an often time-consuming and predetermined decision process, and it is improbable that a measure can be easily adapted. Second, as mentioned above, there is the aspect of time: if the impact of a measure is only observable after a certain length of time, an accompanying evaluation may rule out policy measures that would have been effective. Therefore, accompanying evaluations will usually only evaluate the implementation and such measures as are easily adaptable. Ex post evaluation leaves out step (2), as this has been predetermined; step (6) may include recommendations for future choices of policy measures.
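The repeated cycle of an accompanying evaluation can be sketched as a simple loop: the chosen measure is re-assessed each period and, at least in theory, adapted when the analyses indicate the target is not being met. All function and parameter names below are hypothetical illustrations of the cycle's logic, not part of SCHMID's model:

```python
def accompanying_evaluation(measure, periods, assess, adapt, target_met):
    """Repeat the evaluation cycle as long as budget/time frame allow.

    assess(measure, t)   -- step (4): impact analysis in period t
    target_met(impact)   -- steps (1)/(6): has the target been reached?
    adapt(measure, impact) -- adapt the measure (rare in practice)
    """
    for t in range(periods):  # the budget and time frame cap the cycle
        impact = assess(measure, t)
        if target_met(impact):
            return measure, "target reached"
        measure = adapt(measure, impact)
    return measure, "time frame exhausted"

# Toy run: impact grows as the measure's intensity is raised each period.
result = accompanying_evaluation(
    measure=1,
    periods=5,
    assess=lambda m, t: m * 10,
    adapt=lambda m, impact: m + 1,
    target_met=lambda impact: impact >= 30,
)
print(result)  # -> (3, 'target reached')
```

The loop also makes the text's caveat visible: if `assess` can only observe impacts after a delay longer than `periods`, the cycle terminates with "time frame exhausted" even for a measure that would eventually have been effective.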

HANBERGER (2001) claims that the relevant questions to be asked in the evaluation of policy measures are the following: “What is the context? Who are the key actors and other stakeholders? What is the policy problem? What are the relevant variables and outcome criteria?” Over the decades, several recommendations for politically relevant evaluations have been made. In the 1970s, BANNER et al. (1975) recommended that the evaluator(s) should be completely independent, well-trained and able to draw from an “invulnerable source of adequate funds”. They also supported a very open approach to the evaluation and to the communication and discussion of its possible outcomes as well as the corresponding implications. In the 1980s, PALUMBO (1987) stressed the importance of recognizing the “different information needs by different stakeholders at different times in the policy cycle” when conducting an evaluation with political relevance. HEDRICK (1988) suggested specifying the full scope of issues to be considered, even if it is impossible to include everything in the actual evaluation. He also stressed that limitations of the evaluation must be stated clearly and that the presentation of the results must include nontechnical statements to make sure they are understood by all stakeholders. In the same vein, TURPIN (1989) reminded evaluators always to discover which stakeholder has which motivation and to make sure that all stakeholders are included in some way to prevent a biased evaluation. He also agreed with HEDRICK in recommending an open discussion of limitations and difficult decisions.

  • [1] Opportunity cost is the utility of the best alternative that has been forgone in favor of the chosen option.
 