Four Evaluation Models

Program evaluations can be structured in a variety of ways. One might follow the goal-free approach of Scriven (1991), the transactional approach of Rippey (1973), or opt for a more standard goal-based evaluation technique. All evaluation models and techniques have one attribute in common: a focus on the object of the intervention or innovation in question.

Here are several models and one method of evaluation that will help you conceptualize the task you have set out to do. This chapter provides a general overview of the approach and mechanics of these models; it does not cover all of the intricacies of planning and implementing each one. If a model piques your interest, consult the references cited for that model for further information.

Discrepancy Model

The discrepancy evaluation model, developed by Malcolm Provus (1971), is used in situations where there is an understanding that a program does not exist in a vacuum but within a complex organizational structure. The model assumes that the aim is not to prove cause-and-effect relationships but to understand the evidence well enough to make reasonable assumptions about cause and effect. In other words, there is more interest in why something might have occurred than in the fact that it occurred. A program is examined through its developmental stages with the understanding that each stage (which Provus defines as design, installation, process, product, and cost-benefit analysis) includes a set of performance standards. The program developers had certain performance standards in mind for how the program should work and for determining whether it was working. This model helps you make decisions based on the difference between those preset standards and what actually exists.
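To make this comparison concrete, here is a minimal sketch in Python. The criteria, standards, and observed values are hypothetical illustrations, not drawn from Provus's materials; the point is only the mechanic of reporting gaps between preset standards and actual performance.

    # Minimal sketch of the discrepancy model's core comparison:
    # preset standards versus what actually exists at a given stage.
    # All criteria and numbers below are invented for illustration.

    standards = {
        "clients enrolled": 100,     # what the design says should exist
        "sessions delivered": 40,
        "staff trained": 8,
    }

    observed = {
        "clients enrolled": 82,      # what the evaluator actually finds
        "sessions delivered": 40,
        "staff trained": 5,
    }

    def discrepancy_report(standards, observed):
        """List each criterion where performance falls short of the
        preset standard, along with the size of the gap."""
        report = []
        for criterion, target in standards.items():
            actual = observed.get(criterion, 0)
            if actual < target:
                report.append((criterion, target, actual, target - actual))
        return report

    for criterion, target, actual, gap in discrepancy_report(standards, observed):
        print(f"{criterion}: standard {target}, actual {actual}, discrepancy {gap}")

A report like this, listing each criterion alongside its standard, the actual result, and the gap, is the kind of information the evaluator feeds back to staff for review and action at each stage.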

The program cycle framework used for this model corresponds roughly to the one presented in Chapter Three, but it highlights points for discrepancy evaluation. The design stage here equates to the needs analysis and program planning stages; installation and process are parts of the implementation stage, where formative evaluation is done; and the product and cost-benefit analysis stages equate to a summative evaluation stage.

Use of the discrepancy model begins with a meeting to develop the evaluator's description of the program. All levels of program staff are invited, and the large group is divided into smaller, workable groups. These groups respond to questions developed by the evaluator to elicit their ideas on how their program is designed. The resulting description of the design is then compared with design standards, which may be devised by the sponsor, drawn from standards for the field, or taken from some other source. Any discrepancies observed between the standards and the developed design are communicated back to the staff for review and action. The evaluator can then use the assessed design in the installation (or implementation) stage as the standard against which to compare the program's operation. With the standards in mind, the evaluator looks at staff and clients and at how they move through the program. A discrepancy evaluator's role is to determine the differences between what is and what should be. Again, this information is communicated back to the staff for any midcourse corrections.

In the process stage, there is a comparison between what is being accomplished (by clients, staff, and others) and the interim products that were anticipated. Here the evaluator communicates the degree to which these interim products have been or will be achieved. In the product stage, the evaluator compares the end products (for example, student learning, behavior change, and increased productivity) with the anticipated end products identified in the original design.

In the final stage, cost evaluation, the evaluator compares the program's cost with that of similar programs having the same or a similar end product. Using the conclusions from this stage (and perhaps from the product stage as well), sponsors can make a policy decision to continue or end the program. This final stage is usually referred to as return on investment or cost-benefit analysis.
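The arithmetic behind these summary figures is straightforward. The sketch below computes return on investment and a benefit-cost ratio for two hypothetical programs with the same end product; the program names and all dollar figures are invented for illustration.

    # Hypothetical cost-benefit comparison for two programs with the
    # same end product. All dollar figures are invented for illustration.

    programs = {
        "Program A": {"cost": 50_000, "monetized_benefit": 65_000},
        "Program B": {"cost": 80_000, "monetized_benefit": 92_000},
    }

    for name, p in programs.items():
        roi = (p["monetized_benefit"] - p["cost"]) / p["cost"]  # net gain per dollar spent
        bc_ratio = p["monetized_benefit"] / p["cost"]           # benefits returned per dollar
        print(f"{name}: ROI = {roi:.0%}, benefit-cost ratio = {bc_ratio:.2f}")

    # Program A: ROI = 30%, benefit-cost ratio = 1.30
    # Program B: ROI = 15%, benefit-cost ratio = 1.15

On these invented figures, Program A returns more per dollar spent, which is the kind of comparison sponsors weigh when deciding whether to continue a program.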

The discrepancy model is useful to program staff who are interested in, and able to support, an evaluator working with them from the outset of program operation. The strength of this model is the involvement of staff in determining and using the evaluation criteria and standards.
