Quality of the process and outputs

The DAC Quality Standards for Development Evaluation describe the basic characteristics of a quality evaluation process and report (OECD, 2010a). Evaluation reports should provide a transparent description of the methodological approach, the assumptions made, data gaps and any weaknesses, so that readers can make an informed judgement of the report's quality. EvalNet members often put systematic quality checks in place to review terms of reference, evaluation methodologies, and draft and final reports against the DAC standards, so that the quality of project outputs is not undermined by a poorly designed evaluation or by unclear or overly ambitious terms of reference. Guidelines and templates support staff in drafting terms of reference (ToR) for evaluations, briefing experts and editing evaluation reports.

As evidenced in the member profiles and through interviews, several evaluation units underline that quality assurance is the responsibility of the contracted evaluation team, based on standards and clear quality criteria that must be set out in the technical bid, together with a request to attach a quality assurance report to the evaluation report. The evaluation units themselves then perform quality control. Quality control checklists aim to standardise and formalise practices for reviewing evaluation deliverables. In the AFD Evaluation Unit, quality control starts with a quality-at-entry grid that checks the evaluability of the project to be evaluated, the soundness of the ToR, the methodological approach and the participation of the right experts in the Reference Group.

The Evaluation Unit in the European Commission ensures that external evaluators know from the start the criteria against which their report will be assessed, as the main quality criteria are set out in the terms of reference in the form of a quality grid. In accordance with the “Better Regulation Guidelines”, the specific steering group and the evaluation manager are jointly responsible for assessing the evaluators' work during the process and for filling in this grid. Similarly, the French AFD and the Portuguese Camoes, I.P. use a grid or matrix to assess the quality of the evaluation product. AFD fills in an overall quality assessment form and sends it to the external evaluation team at the end of each evaluation. Standards and benchmarks are used to assess and improve the quality of evaluation reports.

In 2012, the World Bank Group developed new quality standards to improve approach papers for evaluations. One year later, further improvements were made by clarifying roles and responsibilities, developing guidelines on the use of advisory panel experts, and introducing a “quality enhancement review” process with guidance materials for better methodologies. The Annual Report 2014 notes that a more comprehensive quality assurance framework has been put in place, which includes a new process for the selection and prioritisation of the work programme and new quality standards for evaluation reports, reviewed by management and external peer reviewers (World Bank, 2014).

The IEG work programme for 2016-18 includes continual improvements in evaluation methodology and quality assurance. The quality assurance framework now includes a refined process for selecting and prioritising evaluations for inclusion in the work programme, new quality standards for approach papers, and “Stop Reviews” of evaluation reports. “After Action Reviews” have also been introduced, giving IEG staff a structured debriefing to analyse what happened, why, and how the work could have been done better.

In general, the quality of evaluation implementation is assessed throughout the process. For example, in the Netherlands, IOB has a quality control system that includes internal peer reviewers and reference groups. The IOB evaluator chooses, in consultation with the IOB Director, which experts will act as internal peer reviewers. The reference groups are partly external, consisting of representatives of the policy departments involved in the evaluation, organisations and/or local bodies, and external experts. Each reference group is chaired by IOB management and meets at key points during the process, for example to discuss the draft terms of reference, interim reports and the draft final report.
