Practical Application

When XYZ Corporation—which aims for a sales representative to meet face-to-face with each client annually—decides to train its sales department, executives will want to look at how evaluation fits into the company's overall sales program. That goal requires looking at more than the current training program for sales personnel. XYZ will need to identify the standards by which training can be evaluated, evaluate its training and other possible programs, and then make decisions based on the benefit to the company.

Think of the process in this simplified way: a company that places a high value on customer service will want to be sure its sales force sees clients face-to-face. Now the company wants to use its contact records and tracking program to plan the training program. Program planning will follow, including the criteria, goals, and objectives that allow for evaluation during and following training. XYZ knows from its simple tracking program the number of times customers are seen and who among the staff is seeing them. It also knows how much of its product is sold each year; thus, planners can build in benchmarks, the criteria of success that will guide their evaluation process.
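
To make the benchmark idea concrete, here is a minimal sketch, assuming a hypothetical contact-tracking log; the names, records, and figures are invented for illustration and are not drawn from XYZ's actual data.

```python
from collections import Counter

# Hypothetical customer list and one year of contact-tracking records.
customers = ["Acme Ltd.", "Brixton Co.", "Corto Inc."]
contact_log = [            # (sales rep, customer) for each face-to-face visit
    ("Avery", "Acme Ltd."),
    ("Avery", "Acme Ltd."),
    ("Blake", "Brixton Co."),
]

visits = Counter(customer for _, customer in contact_log)

# Benchmark 1: has every customer been seen face-to-face at least once this year?
unseen = [c for c in customers if visits[c] == 0]

# Benchmark 2: average visits per customer, a criterion that can be tracked
# during and after the training program.
avg_visits = sum(visits.values()) / len(customers)

print("Customers not yet visited:", unseen or "none")
print(f"Average visits per customer: {avg_visits:.2f}")
```

In practice the tracking program may already report figures like these; the point is that the benchmark is stated before training begins, so that later evaluation has something concrete to compare against.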

Because evaluation means looking at change, evaluation shows up in every stage of the program. Monitoring begins as soon as XYZ chooses a training program. Have all possible trainees been recruited? Do these trainees have the prerequisite skills to succeed in the training? Do the trainers have the necessary knowledge and skill level to train?

Ongoing evaluation at this point allows for midcourse corrections and can make a huge difference in the success of the program. (Chapter Three discusses monitoring in more detail.)

Finally, evaluation is done after the program or project concludes. The evaluator looks only at final data. What happened to the trainees? Did the program do what it set out to do? This question, of course, brings you full circle to your philosophy. In XYZ Corporation's case, what is happening to the sales force? Are they meeting their goal of seeing each customer more frequently? You, the evaluator, can clearly see the final outcome with an evaluation at the end of a program cycle.

Formative and Summative Evaluations

Evaluations that focus on examining and changing processes as they are happening are called formative evaluations; those that focus on reporting what occurred at the end of the program cycle are called summative evaluations. These concepts are amplified in Chapter Three.

Putting It All Together

When you define evaluation, you can think in terms of cycles. Consider XYZ Corporation's sales training program and how its circumstances pass through the cycle of goals → needs analysis → program planning → implementation → formative or summative evaluation.

Program Cycles

Goals
Fulfill XYZ Corporation's mission: to serve internal customers by providing training to develop sales skills and a highly motivated commitment to customer satisfaction. Serve external customers by understanding and providing for their needs, both stated and inferred.

Needs analysis
Identify needs—for example, the sales force's need for an introduction to the concepts of the high-performance work environment, customer satisfaction, and continuous improvement.

Program planning
Select a sales training program that addresses the needs. Decide on evaluation criteria: How will we know we have succeeded? Do we have measures already in place or do we need to develop or identify new ones?

Implementation
The program begins, but it is still fluid; are we reaching those salespeople who need the training? Changes can still be made.

Formative or summative evaluation
Formative, because ongoing feedback helps the trainers correct any problems.

The person charged with performing a program evaluation will find it helpful to think of evaluation in terms of a format. You should conduct an evaluation with the evaluation design format in mind, allowing it to guide you through the steps to a conclusion. You can use the format shown in Exhibit 1.1 in conjunction with the information in this chapter to plan and conduct any program or project evaluation.

Strategically, your work on developing the format should begin at the last section of the exhibit: audience. Your first task—not your last—is to ask who is interested in the results of this evaluation. The term is placed last on the chart because it is to the audience—the funding source, the management, the team facilitator, or other audience—that an evaluator delivers the evaluation report. Determining the audience, or stakeholders, is an essential step.

Exhibit 1.1. Evaluation Design Format.

Project _

Focus (formative, summative, or both)

Evaluation Question

Activities to Observe

Data Source

Population Sample

Data Collection Design

Responsibility

Data Analysis

Audience

Not only does this step help an evaluator focus the evaluation activities, but it also gives an early view of how the results of the evaluation might, or might not, be used.

In XYZ Corporation the audience for the evaluation might be the sales team facilitator, who needs to know how the team members are progressing toward the team's goals. However, the audience might also be the XYZ human resource director, who wishes information on the effectiveness of the newest customer service training. Or the audience might be the XYZ plant manager, who needs to consider the impact the sales force has had on the marketing of the latest product line. Given these three examples, the evaluator might (1) focus the report on suggestions for program improvement as the teams are meeting, or (2) include knowledge gains from pretesting to posttesting, or (3) track and link six-month changes in sales to training. Each of these possibilities suggests different evaluation questions, data sources, subjects, data collection strategies, data analysis techniques, and interpretations.
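
One way to picture the design format at this stage is as a fill-in-the-blank record, completed differently for each audience. The sketch below is only an illustration of that idea: it mirrors the Exhibit 1.1 headings as fields and fills them in, hypothetically, for the first audience described above (the team facilitator). None of the entries come from an actual XYZ design.

```python
from dataclasses import dataclass

@dataclass
class EvaluationDesign:
    """Fields mirroring the Exhibit 1.1 headings."""
    project: str
    focus: str                      # formative, summative, or both
    evaluation_question: str
    activities_to_observe: str
    data_source: str
    population_sample: str
    data_collection_design: str
    responsibility: str
    data_analysis: str
    audience: str

# Hypothetical entry for the team-facilitator audience.
facilitator_design = EvaluationDesign(
    project="XYZ Corporation sales training",
    focus="formative",
    evaluation_question="Are team members progressing toward the team's goals?",
    activities_to_observe="Team meetings and role-play sessions on customer cues",
    data_source="Meeting notes and the contact-tracking records",
    population_sample="The 18-person sales force",
    data_collection_design="Observations collected throughout the program cycle",
    responsibility="Internal evaluator",
    data_analysis="Progress summaries reported back to the facilitator",
    audience="Sales team facilitator",
)
```

A summative design for the human resource director or the plant manager would fill the same fields quite differently, which is the point of settling on the audience first.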

These are the essential parts of the design format:

Evaluation questions are those that your audience needs to have answered in order to make cogent decisions. Examples of evaluation questions might be these: Did those who participated in the first quarter training program perform significantly better in the second quarter than those who did not? Did team leaders evaluate trainees' time management skills as being satisfactory? Did the overall number of face-to-face meetings with customers increase over the three-month period? These decisions might come during the program cycle (for a formative evaluation) or at the end of the program cycle (for a summative evaluation).

Activities are those program activities that will result in accomplishing the program objectives. The objectives may already exist as statements used to communicate the intended outcomes (accomplishments) of the program to staff, clients, sponsors, and the powers that be. Some examples include helping the sales force recognize cues to customer satisfaction or dissatisfaction, teaching the sales force how to apply available technology to time management, and teaching the sales force to turn face-to-face meetings into sales.

Whether or not the objectives have been stated, the desired activities still need to be stipulated so that the evaluator can make at least some causal connections between what the program did and what resulted. For example, trainees attend sessions in which they are taught how to use the company's available technology to enhance their time management skills. The sales force meets with the trainer to discuss customer cues and to role-play responses to those cues.

Data sources can be both existing records the evaluator can examine for data and the instruments that will be used to collect new data. At XYZ Corporation, the sales records prior to training and after training could become data sources. Others include individual achievement records of the sales force before and after training. Attitude surveys might be administered to internal customers, such as the sales force, as well as to external customers. Pretests and posttests may be administered to measure growth in time management and technology concepts and skills.

Population sample identifies those individuals from whom or about whom the data will be collected. In the case of XYZ, these individuals are the sales force as well as their customers: there are 18 people in the sales force sample and 162 customers in the external sample.

Data collection design illustrates the context and schedule for the data collection. At XYZ, some data might be collected from pretests and posttests, that is, collected prior to the training initiative and following it. Data from existing records could be assembled before the training. Attitude scales would be administered to customers prior to the sales force training and again six months after the training.

Responsibility delineates who will have the responsibility to perform each evaluation activity.

Data analysis outlines how the evaluator will analyze and interpret the data after collection.
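
As a simple, hypothetical sketch of what the analysis step might produce in the XYZ scenario, the evaluator could compare pretest and posttest scores and compare pre- and post-training visit counts against the benchmark. All scores and visit counts below are invented for illustration.

```python
# Hypothetical pretest/posttest scores (percent correct) for three trainees.
pretest  = {"Avery": 55, "Blake": 62, "Casey": 48}
posttest = {"Avery": 78, "Blake": 81, "Casey": 70}

gains = [posttest[name] - pretest[name] for name in pretest]
avg_gain = sum(gains) / len(gains)

# Hypothetical face-to-face visits per rep, before and six months after training.
visits_before = {"Avery": 9, "Blake": 7, "Casey": 5}
visits_after  = {"Avery": 14, "Blake": 11, "Casey": 9}

avg_before = sum(visits_before.values()) / len(visits_before)
avg_after  = sum(visits_after.values()) / len(visits_after)

print(f"Average knowledge gain: {avg_gain:.1f} points")
print(f"Average visits per rep: {avg_before:.1f} before vs. {avg_after:.1f} after training")
```

How such figures are interpreted, and whether a more formal comparison such as a significance test is warranted, depends on the evaluation questions and the audience identified earlier in the design.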

As you progress through The ABCs of Evaluation, you will learn more about each of the sections of the evaluation design. For now, just read through the design that follows to become familiar with how the process works under a specific circumstance. Later, you will have an opportunity to use the evaluation design format to set up your own evaluation.

Exhibit 1.2 shows the steps in the formulation of an evaluation design that resulted from a human resource department's (HRD's) need to evaluate a professional development program already in existence. The course, taken by all new hires, was called Technology Training for Administrative Assistants. Note how the evaluation questions spring

Exhibit 1.2. Evaluation Design: Administrative Assistants Technology Training.


 