Evaluation Design Summary

An evaluation design describes how an HP-DP program plans to minimize or eliminate major systematic (non-random) biases attributable to pre-existing characteristics of participants. An experimental design, if successfully implemented, typically asserts control over biases in three major categories, Measurement Bias, Selection Bias, and Historical Bias, by distributing error equally between the E and C groups of participants. Randomization of participants at each evaluation-program site, or stratification and matching of sites, distributes all measured and unmeasured participant characteristics equally by chance (if successful). This process should establish at least two equivalent groups at baseline: a C group that typically receives a “basic” HP-DP intervention (X1) and an E group that typically receives a “basic + best practice” HP-DP intervention (X1 + X2 + X3).
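The randomization step described above can be sketched in a few lines of Python. This is an illustrative simulation only, not a procedure from the text: the participant list, the “age” characteristic, and the group sizes are all hypothetical, and the point is simply that shuffling before splitting distributes a measured baseline characteristic between the C and E groups by chance.

```python
# Illustrative sketch of simple randomization into C and E groups.
# All data here are hypothetical; "age" stands in for any measured
# baseline characteristic.
import random

random.seed(42)

# Hypothetical pool of 200 participants with one baseline measure.
participants = [{"id": i, "age": random.randint(18, 65)} for i in range(200)]

# Randomize: shuffle the pool, then split it in half.
random.shuffle(participants)
half = len(participants) // 2
c_group, e_group = participants[:half], participants[half:]

def mean_age(group):
    """Average of the baseline characteristic for one group."""
    return sum(p["age"] for p in group) / len(group)

# Chance assignment should make the group means similar, but as the
# text stresses, equivalence must still be confirmed empirically.
print(f"C mean age: {mean_age(c_group):.1f}")
print(f"E mean age: {mean_age(e_group):.1f}")
```

Note that the same shuffle also balances characteristics that were never measured, which is the key advantage of randomization over matching on known variables alone.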

It is important to stress that a randomized design does not always “control” for the multiple dimensions of the three bias categories. E versus C group equivalence at baseline and at follow-up should not be assumed: it must be empirically confirmed. Although rare, if the E and C groups differ significantly on one or more baseline variables, the difference will usually be due to random error, not systematic error. Analytical methods, for example, Analysis of Covariance (ANCOVA), may be applied to the impact data to adjust for baseline differences. During the planning and formative evaluation phases, an evaluation team needs to train staff, prepare an implementation plan, and conduct pilot tests to identify and address each source of bias. The methodological and implementation issues in selecting a design, shown in Table 3.5, are described in the following sections and case studies.
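The covariance-adjustment idea mentioned above can be sketched numerically. The following is a minimal ANCOVA-style sketch under stated assumptions, not an analysis from the text: the sample sizes, effect sizes, and noise levels are invented. It regresses a simulated follow-up outcome on a group indicator plus the baseline score, so the estimated group effect is adjusted for any chance baseline difference between E and C.

```python
# Minimal ANCOVA-style adjustment sketch (hypothetical data).
# Follow-up outcome is modeled as: intercept + group effect + baseline slope.
import numpy as np

rng = np.random.default_rng(0)
n = 100  # hypothetical participants per group

group = np.repeat([0, 1], n)             # 0 = C group, 1 = E group
baseline = rng.normal(50, 10, 2 * n)     # baseline measure
# Simulated truth: follow-up tracks baseline, plus a +5 effect for E.
followup = 0.8 * baseline + 5 * group + rng.normal(0, 3, 2 * n)

# Design matrix: intercept, group indicator, baseline covariate.
X = np.column_stack([np.ones(2 * n), group, baseline])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)

intercept, group_effect, baseline_slope = coef
# The group effect is now adjusted for baseline differences; it should
# land near the simulated +5, and the slope near the simulated 0.8.
print(f"adjusted group effect: {group_effect:.2f}")
print(f"baseline slope:        {baseline_slope:.2f}")
```

In practice this adjustment would be run with a dedicated statistical package and accompanied by diagnostics, but the design matrix above captures the core of how ANCOVA separates the intervention effect from baseline imbalance.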
