Choosing the Right Model
The logical question at this point is, "How do I know which model suits my particular situation and needs?" The answer lies in looking back at your answer to the original question, "Why evaluate this program?" If it is to meet a mandate from a funding source or management, you might want to employ the goal-based model. If it is to learn something about your program so that staff can improve service delivery, you might employ the systems analysis or goal-free models. If your intent is to critically examine certain aspects of your program for reduction or promotion, you might use the decision-making, art criticism, or transaction models. The adversary model would be used when the purpose of the evaluation is to settle differences of opinion among stakeholders.
Table 5.1 will help you to decide which model to use.
Table 5.1 Choosing a Model

| Model | Intended Outcome | Evaluator's Tasks | Sample Evaluation Questions |
| --- | --- | --- | --- |
| Adversary | Resolution of differences of opinion | Facilitation | What are arguments for and against program components? |
| Art Criticism | Critical reflection, improved standards | Expert judgment | Would a professional approve of program activities? |
| Decision-Making | Effectiveness, impact, quality | Data collection, analysis, interpretation | Was the program effective? What aspects of the program were effective? |
| Discrepancy | Compliance with standards | Facilitation, monitoring, data collection, analysis, interpretation | How did the program perform compared to standards? |
| Goal-Based | Efficiency, effectiveness, impact | Data collection, analysis, interpretation | Did the clients change (grow, learn)? |
| Goal-Free | Usefulness, impact | Data collection, analysis, interpretation | What happened in the program? |
| Systems Analysis | Efficiency, effectiveness | Monitoring, data collection, analysis, interpretation | Were the expected outcomes achieved? Were the expected effects achieved efficiently? |
| Transactional | Program understanding | Participation, data collection, analysis, interpretation | What does the program look like from different vantage points? |
Evaluation Design Format
Now that you have chosen a model to use, you can return to the larger picture that we introduced at the end of Chapter One, the overall evaluation design format. This format introduces the components that may occur in any evaluation: evaluation questions, program objectives, activities observed, data sources, population samples, data collection design, responsibility, data analysis, and audience. Not all components appear in each of the models—the program objectives component is not used in the goal-free model, for example—but most are common to all.
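Before walking through each component, it may help to picture one row of such a design matrix laid out explicitly. The sketch below is purely illustrative: the field names are our own shorthand for the components just listed, and the example program, objective, and dates are hypothetical, not drawn from the text.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignRow:
    """One row of an evaluation design matrix: a single evaluation
    question and the plan for answering it."""
    evaluation_question: str
    program_objective: str        # omitted in a goal-free evaluation
    activities: List[str]         # program activities tied to the objective
    data_source: str              # survey, interview, observation, test, existing records
    sample: str                   # who, and how many, will provide data
    data_collection_design: str   # pre/post, interim, control or comparison group
    responsibility: str           # evaluator or named staff member
    data_analysis: str            # planned statistical or qualitative procedure
    audience: List[str] = field(default_factory=list)  # who receives this finding

# Hypothetical example row
row = DesignRow(
    evaluation_question="Did clients' parenting skills improve?",
    program_objective="Clients will demonstrate improved parenting skills on a post-test.",
    activities=["Weekly parenting classes"],
    data_source="Parenting skills inventory",
    sample="All 40 enrolled clients",
    data_collection_design="Pre- and post-test, no comparison group",
    responsibility="Evaluator administers and scores the inventory",
    data_analysis="Compare mean pre- and post-test scores",
    audience=["Funding agency", "Program staff"],
)
```

A completed design format is simply a set of such rows, one per evaluation question, which is why the evaluation questions are discussed first below.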
Evaluation questions are central to all the models and all evaluations. As described in Chapter Four, these are questions that the evaluator, the program staff, or both develop to ensure that the evaluation results will address meaningful questions and lead to program improvement or promotion. They are broader questions than those posed by program objectives.
Program objectives may be added at this point. These are the statements of intent that the program developers created to communicate what would be accomplished if the plan were implemented. These statements are the evaluator's friend because, if well stated, they contain what is to occur (activity), who will be affected (client), the expected outcome (criterion), and how you will know (measurement). With this information, the evaluator knows what activities the staff will perform, what results they expect, and how they will measure program performance.
Activities are the specific services and tasks that program staff will conduct for clients. It is important for the evaluator to know the activities and the objectives to which they relate in order to identify cause-and-effect relationships.
Data source is the instrument for data collection and recording. Depending on whether you will be employing qualitative or quantitative methods, your data source may be surveys, interviews, observation protocols, tests, or calibrated measuring devices. Your data source may collect new data from clients or contain already existing data on clients.
Sample refers to the individuals from whom you will actually collect data. The term population refers to all the individuals who might be eligible to participate; the sample is the subset of that population whom you will target to participate. You will learn more about sampling and sampling techniques in Chapter Eight.
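To make the distinction concrete, here is a minimal sketch of drawing a simple random sample from a program's client roster. The roster size, client identifiers, and sample size are hypothetical, and real sampling decisions involve the considerations covered in Chapter Eight.

```python
import random

# Population: every client eligible to participate in the evaluation (hypothetical roster)
population = [f"client_{i:03d}" for i in range(1, 201)]  # 200 enrolled clients

# Sample: the subset actually targeted for data collection
sample_size = 50
sample = random.sample(population, sample_size)  # simple random sample, no replacement

print(f"Population size: {len(population)}, sample size: {len(sample)}")
```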
Data collection design is the schedule on which you will collect data. Depending on the model you select and the purpose of your evaluation, you may need to collect data before the clients interact with your program (pre) and after they have partaken of your activities (post). On occasion, you might collect data while they are partaking (interim) so that you can monitor change. The data collection design also communicates whether only those partaking in the activity will be included in the data collection or whether you will select a group of individuals who will not partake in the activity (control or comparison group).
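One way to write such a schedule down, again purely as an illustration with hypothetical group names and dates, is as a simple mapping from each group to its planned measurement points.

```python
# A pre/interim/post design with a comparison group, expressed as a schedule.
# Group names and dates are hypothetical.
data_collection_design = {
    "treatment group":  {"pre": "2024-01-15", "interim": "2024-04-15", "post": "2024-07-15"},
    "comparison group": {"pre": "2024-01-15", "interim": None, "post": "2024-07-15"},
}

for group, points in data_collection_design.items():
    scheduled = [point for point, date in points.items() if date is not None]
    print(f"{group}: data collected at {', '.join(scheduled)}")
```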
Responsibility refers to listing, as clearly as possible, what evaluation activities you will perform as the evaluator. Those evaluation activities that you will not perform need to be identified, along with which members of the program staff will be responsible.
Data analysis explains what statistical manipulations, if any, will be performed on the data collected. Each data set from each data source will probably have its own data analysis procedure.
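For a simple pre/post design, the analysis might be no more than comparing mean scores before and after the program. The sketch below uses invented scores for ten clients and computes the mean change; it stands in for whatever statistical procedure your design actually calls for.

```python
from statistics import mean

# Hypothetical pre- and post-test scores for the same ten clients
pre_scores  = [12, 15, 11, 14, 13, 10, 16, 12, 14, 13]
post_scores = [16, 18, 14, 17, 15, 13, 19, 15, 17, 16]

changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Mean pre-test score:  {mean(pre_scores):.1f}")
print(f"Mean post-test score: {mean(post_scores):.1f}")
print(f"Mean change:          {mean(changes):.1f}")
```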
Audience is the group for whom the report is intended. In some cases, the audience may be different for different evaluation questions. Sponsors, for example, may be more interested in evaluation questions directed at program efficiency and impact, whereas program staff would be more interested in evaluation questions on program effectiveness.
For example, if your task is to help sponsors or administrators decide the future of the program, you may elect to use the decision-making model. This makes the focus of the evaluation summative; you are not interested in how the program is going at the moment but rather in its end results or effects. You need to report the program's effects to those who authorize or pay for both the program and the evaluation. So you can certainly say that you know the audience who will be interested in the evaluation you are doing.
Other Considerations
The several models discussed in this chapter are by no means the only evaluation models available to you. Others offer different structures, foci, methods, and so forth, and hybrid models contain aspects of two or more of the models.
In Chapter One, we presented two definitions of evaluation that are commonly accepted in the field:
• Evaluation is the systematic process of collecting and analyzing data in order to determine whether and to what degree objectives have been or are being achieved.
• Evaluation is the systematic process of collecting and analyzing data in order to make a decision.
Just by examining these two definitions, you can begin to identify which evaluation models might best suit an evaluator who is operating within one of these philosophical frameworks. Yet circumstances might arise in a particular evaluation that require you to use whatever approach works best for both the program and you.
Putting It All Together
In this chapter, we have presented the "meat" of evaluations and evaluation methodology. The models discussed provide a variety of ways to look at a program that is either clearly designed or not so clearly designed. The range of models allows an evaluator to be rather dogmatic in assessing a program against a preconceived set of standards and objectives, or fairly free to evaluate a program's worth based on what it produces—not what it said it would produce.
Organizations and the programs and projects associated with them have an abundance of people besides the evaluator who are assets to an evaluation. Because these people have firsthand knowledge of the program or project, their contribution to the selection of a model may be extremely valuable (Gray, 1998). The evaluator should choose the model only if those being evaluated prefer that the evaluator make the choice. Often those involved with the program see evaluation as one of those highly technical or complicated functions in which they should have no say, but that view is incorrect. All stakeholders should be involved in the decisions surrounding an evaluation, with the evaluator acting as the facilitator of the evaluation.
Using the chosen model, you create the evaluation design format, which outlines the components of the evaluation matrix. These parts tend to flow in a systematic order, but they all stem from the evaluation questions. These questions represent the real interests of the stakeholders: what they want to know about the program in order to improve it.
Key Words and Concepts
Hypothesis: An assumption; something thought to be true; a tentative expectation
Quantitative: Focuses on numbers and measurements; deductive reasoning
Qualitative: Focuses on perception and understanding through verbal means; inductive reasoning