Other Levels

There are other levels of evaluation, as defined by Kirkpatrick (1994, 2009). These levels refer to the eventual use of the evaluation data and to who might make use of the results. Kirkpatrick's four levels of evaluation are as follows:

Level 1, Reaction: Participant impressions

Level 2, Learning: Learning acquired

Level 3, Behavior: Application of the learning

Level 4, Results: Extent that targeted outcomes occur for the company, agency, or school system

In level 1 you are examining the perceptions of individuals who were directly involved as clients of your program. You are interested in their perceptions of how they benefited from the program, what they thought of the program activities, and how they might use what they gained from the program. Program staff will be particularly interested in this level because the feedback will tell them how their efforts are being perceived and used.

In level 2 you are measuring the effectiveness of the program in doing what it set out to do. At this stage you usually compare a set of standards (criteria, goals, or objectives) with the actual results. You might pretest and posttest clients to ascertain their change, or, to see the benefits of program participation, compare a group that received the activities of the program with a group that did not. Here you measure the extent to which individuals acquired the knowledge, skills, and attitudes originally projected. Program staff and sponsors will be particularly interested in the results of this level of evaluation.
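To make the level 2 comparison concrete, here is a minimal sketch in Python of a pretest/posttest gain and a treatment-versus-comparison contrast. All scores, group sizes, and variable names are hypothetical, and the choice of a pooled-standard-deviation effect size is illustrative; a real evaluation would select measures and statistical tests to fit the program's objectives.

```python
# Sketch of a level 2 comparison with hypothetical data.
from statistics import mean, stdev

# Hypothetical pretest/posttest scores for program participants.
pre  = [52, 61, 48, 70, 55, 66, 59, 63]
post = [68, 72, 60, 81, 63, 79, 70, 74]

# Scores for a comparison group that did not receive the program.
comparison_post = [55, 63, 50, 71, 57, 67, 60, 64]

# Average gain per participant (posttest minus pretest).
gains = [b - a for a, b in zip(pre, post)]
print(f"mean gain: {mean(gains):.1f} points")

# Standardized difference between program and comparison groups
# (Cohen's d with a pooled standard deviation, equal group sizes).
diff = mean(post) - mean(comparison_post)
pooled_sd = ((stdev(post) ** 2 + stdev(comparison_post) ** 2) / 2) ** 0.5
print(f"effect size (Cohen's d): {diff / pooled_sd:.2f}")
```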

In level 3 you are attempting to discover the overall impact of the program on clients: Did the program change their long-term behavior, attitudes, or performance? Were the changes observed in the level 2 evaluation sustained over time? To what degree did clients apply what they learned? Sponsors, program staff, and clients will all be interested in this level of evaluation.

In level 4 you want to determine the extent to which the parent institution (sponsor) benefited from the program. Here you examine whether targeted outcomes have resulted from the activities examined under the first three levels; the focus shifts from the relative benefit to the client to the relative benefit to the sponsor. To gauge efficiency, you determine whether the program's outcome warranted the sponsor's expenditure of resources. To gauge effectiveness, you investigate, for example, whether the productivity of a certain group of employees increased. And to gauge impact, you discover whether the program changed the performance, the product, or the image of the sponsor. By addressing these macro concerns, you enable decision makers to see more clearly what programs do to help their parent institution.
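As a concrete illustration of the level 4 efficiency question, the sketch below computes a simple return-on-investment ratio. The dollar figures are invented, and the monetized-benefit estimate is itself an assumption the evaluator would have to defend; the ratio is one common way, not the only way, to frame the question.

```python
# Sketch of the level 4 efficiency question: did the outcome
# warrant the sponsor's expenditure? All figures are hypothetical.
program_cost = 40_000       # direct spending on the program
monetized_benefit = 55_000  # e.g., estimated value of productivity gains

roi = (monetized_benefit - program_cost) / program_cost
print(f"return on investment: {roi:.0%}")   # -> 38%
```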

Formal Reasons to Evaluate

To know what type of evaluation to use, managers first set out their reasons for undertaking it. In the public sector, for example, federally or state-sponsored programs demand third-party or internal evaluations; these evaluations are not optional. The evaluation may be required for fiscal purposes, with the program's future funding hinging on the results. Or an evaluation may be needed for comparison purposes, to find out which of several methods is the most effective and therefore most worth continued funding.

Certain questions need to be addressed in a formally mandated evaluation. Questions of efficiency might be: When did you begin the program or project? How much did it cost? How long did it take? Questions of effectiveness might be: What did you do? How well did you do it? What were the outcomes? Questions of impact might be: Did the program influence lives? Did the program add value? Table 1.1 shows sample questions for an evaluation of a project to develop an affirmative action plan.

Table 1.1 Questions of Efficiency, Effectiveness, and Impact (Affirmative Action Example). The questions cross three evaluation criteria with four program levels: Resources (manager working on the project), Activities (obtaining commitment of representatives to serve on task force), Strategy (task force of employees to develop plan), and Objective (affirmative action plan with support of employee advocate groups).

Efficiency

Resources: Cost per person contacted versus planned cost?
Activities: Cost per group endorsing plan?
Strategy: Cost of group-developed plan versus manager-developed plan sent to groups for comment?
Objective: Cost per group actually supporting plan versus planned cost?

Effectiveness

Resources: Was the selection of this manager the correct one?
Activities: How does the diversity of the groups compare to the diversity of the company?
Strategy: How many groups endorse the plan?
Objective: How many groups actually adopt the plan?

Impact

Resources: How many staff hours were expended on this project versus another project?
Activities: Has this generated interest from employees in sharing new ideas?
Strategy: Has this approach been used in solving other problems?
Objective: Has the number of grievances filed been reduced?
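To show how the efficiency cells of Table 1.1 might be quantified, here is a brief sketch comparing an actual unit cost to its planned figure, using the "cost per group endorsing plan" question as the example. The dollar amounts and group counts are invented for illustration.

```python
# Sketch of one efficiency cell from Table 1.1 with hypothetical numbers.
actual_cost = 12_000    # spending on outreach to advocate groups
planned_cost = 10_000   # budgeted amount for that outreach
groups_endorsing = 8    # groups that actually endorsed the plan
groups_planned = 10     # groups the budget assumed would endorse

actual_per_group = actual_cost / groups_endorsing    # 1500.0
planned_per_group = planned_cost / groups_planned    # 1000.0
print(f"cost per endorsing group: ${actual_per_group:,.0f} "
      f"(planned: ${planned_per_group:,.0f})")
```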

Another formal reason for evaluating is to make a case for a new program; a superior may ask you for the evaluation, or you may undertake it for yourself. Often teachers learn about new ideas at conferences, through reading, or from talking to colleagues; however, they may feel tentative about trying out the new ideas because they are under the scrutiny of administrators, parents, and other teachers. A person who is committed to a process that has merit will evaluate the process so that support can be garnered on the basis of data. Gut feelings, perceptions, innuendo, and anecdotes are comforting, but they are not convincing to people who require more objective evidence. Data may not always convince people, but they at least act as a common currency for demonstrating the value of your case.

Good managers routinely collect data about their programs or projects in anticipation of the need to justify them. These are some good questions to ask: How does this program compare to similar efforts? What would I have done if I had not begun this program? Are the goals or objectives justifiable as viewed by clients, staff, or the funding source? Did we accomplish what we set out to do?

The final reason for evaluating is to improve or change a program. In this instance, you collect data to present to the people who will make decisions about changing or improving the program. Questions to answer are these: How will I use the evaluation in the program planning process? Is it meant, for example, to upgrade or change program personnel, or to ensure accountability for expenditures?

Evaluation may also be performed as part of the preparation to seek funding from internal or external sources. A good example is the teacher who learned at a recent conference about a new technique of computer conferencing used to improve writing skills. Knowing it was highly unlikely that the school had the resources to equip her classroom with sufficient technology, she brought her own computer from home and allowed the students to experiment with conferencing in their writing assignments. The data collected from this activity could then be used to support her efforts to obtain funds from the school system or an external funding source for the purchase of the required technology. Granted, this evaluation effort will do little for the current learners; the data collected from them, however, will benefit future learners in that teacher's classroom. The evaluation's most useful contribution is to the program planning process.

 