Choosing an Evaluation Model

More Than One Way to Evaluate

As the committee at Grandview learned, choosing an evaluation model would help all the participants to conceptualize the evaluation task. They would discover, however, that there is a wide range of models from which to choose. Indeed, modern evaluation practice has developed extraordinarily complex and sophisticated approaches that are beyond the scope and skills of the average manager conducting an evaluation. But evaluators at any skill level need to have some basic understanding of commonly used models and the ability to choose a model that fits their evaluation needs.

Program and project managers need a good understanding of existing evaluation models. These models differ in their approaches, their philosophical underpinnings, their data collection sources, and the roles they assign to the evaluator and project staff. Most evaluators draw on their knowledge of and experience with these models when planning the most appropriate evaluation design for a project. A seasoned evaluator will be able to select the appropriate model for the situation to be evaluated and should also be able to explain to the project director the intricacies of each model and the reasons for recommending one over another. Such an interchange goes a long way toward building the trust that is needed between the evaluator and the project director, as well as between the evaluator and other stakeholders in the evaluation. This trust is essential both to conducting the evaluation successfully and to the eventual use of the evaluation results for program improvement.

All the models for evaluation differ from research strategies in that evaluation results are provided to the appropriate stakeholders for the purpose of program or project improvement. The purpose of research, in contrast, is to draw causal links between observed phenomena and to add to the knowledge base on those phenomena, with the audience being the professional field in general. An evaluator should therefore beware of the common mistake of confusing research with evaluation and should consider, for any model, whether its focus is research or evaluation.

This distinction bears further explanation. Research involves the scientific method, which controls variables, such as behaviors, in an attempt to explain and predict. It assumes that all variables can be controlled and that there are discoverable causes; it is an orderly process. Research leads toward the development of knowledge; its inquiries push the envelope of what is known. Furthermore, the research process takes place in a recognized and defined arena in which a formal hypothesis leads to the development of a research design. Data are collected and analyzed, conclusions are drawn, and the hypothesis is either confirmed or rejected.

For example, a local hospital believes that patient education for cardiac patients will be more effective if it involves more than the traditional discussion and written materials after a cardiac episode. Instead, patients are provided with a support group made up of other cardiac patients who meet on a weekly basis at the hospital. Patients are randomly assigned to attend or not attend this support group after they receive the traditional patient education. They are then monitored for six months to determine whether any changes in lifestyle, such as eating habits, exercise, and medication adherence, can be observed in the two groups.

Here the hypothesis is that the support group integrated with patient education will be more effective in changing patient behavior than patient education alone. Subjects are identified, a treatment is delivered, and data are collected and analyzed. Using the data, researchers accept or reject the hypothesis. These findings will add to the knowledge base on patient education techniques, and a hospital that wants to design a new program or change its current patient education program may use them. The value of research to a program is discussed further in Chapter Eight.
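To make the accept-or-reject step concrete, the analysis might look like the following sketch, written in Python. This is purely illustrative and not part of the hospital's actual study: the group sizes, the simulated cholesterol changes, and the choice of a two-sample t-test are all assumptions made for the sake of the example.

from scipy import stats
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulated six-month changes in blood cholesterol (mg/dL);
# negative values mean improvement. All data here are invented.
support_group = rng.normal(loc=-25, scale=12, size=40)   # education plus support group
education_only = rng.normal(loc=-15, scale=12, size=40)  # traditional education alone

# Two-sample t-test: do the mean changes differ between the groups?
t_stat, p_value = stats.ttest_ind(support_group, education_only)

alpha = 0.05  # conventional significance threshold
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the hypothesis of no difference between the groups.")
else:
    print("Fail to reject: no detectable difference between the groups.")

A real study would choose a statistical test to match its design, but the underlying logic, comparing outcomes across the two groups and then accepting or rejecting the hypothesis, would be the same.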

In contrast, when you evaluate you are trying to learn what is going on in a particular program for the people who are interested in that program. You are not trying to expand general knowledge, although this can happen, because many evaluation reports are published. Instead, you do the following:

1. Establish evaluation questions (use your EPD to do this)

2. Create an evaluation design

3. Collect data

4. Analyze data

5. Draw conclusions from data

6. Make decisions on a program's efficiency, effectiveness, and impact

7. Report to stakeholders

As you expand your understanding of evaluation and reach the point of choosing an evaluation model, you will want to understand the meaning of two other terms: qualitative and quantitative evaluations. Quantitative evaluation emphasizes facts that can be stated numerically, and qualitative evaluation emphasizes understanding through verbal narratives and observations rather than through numbers. (These terms are discussed in greater detail in Chapter Six.) Both quantitative and qualitative methodologies and data can be used in all the models. In the preceding example of the patient education program, if the data collected were observations of patient behavior, anecdotal accounts from patients and family members, or responses to a survey completed by the patients, they might be considered qualitative in nature. If the data collected, however, were blood cholesterol levels, electrocardiogram readings at several points over the six months, or the patients' weights, they would be considered quantitative.
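The contrast also shows up in how the two kinds of data are handled. The following short sketch, again with entirely invented values from the hypothetical patient program, summarizes quantitative measures numerically and reduces qualitative comments to hand-coded themes:

from collections import Counter
from statistics import mean

# Quantitative: numeric measurements taken over the six months (invented values).
cholesterol_change = [-30, -22, -18, -25, -12]   # mg/dL per patient
weight_change = [-4.5, -2.0, -6.1, -1.2, -3.3]   # kg per patient

print(f"Mean cholesterol change: {mean(cholesterol_change):.1f} mg/dL")
print(f"Mean weight change: {mean(weight_change):.1f} kg")

# Qualitative: patient comments, reduced by hand to coded themes (also invented).
coded_themes = [
    "peer support", "diet awareness", "peer support",
    "medication routine", "diet awareness", "peer support",
]
print("Most common themes:", Counter(coded_themes).most_common(2))

Quantitative data lend themselves to statistical summary; qualitative data must first be interpreted and organized, here by coding comments into themes, before any pattern can be reported.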

 