In the previous chapter Logic Models were discussed as a means of obtaining a snapshot of the flow of a project. One of the key ingredients of both the Logic Model and the Evaluator's Program Description (EPD) is a clear understanding of the objectives of the project, or what the project is planning to accomplish. These objectives are usually stated in a manner that indicates Who is to be affected (Client), What will happen (Activity), When it will result (Time line), and How one will know it was accomplished (Measure). If the evaluator knows the "Who," this identifies the Population or Sample from whom data are to be collected. The "What" tells the evaluator what he or she needs to observe in order to understand the causal link between the intent and the result. Understanding "When" results are anticipated will assist the evaluator in creating the Data Collection Plan, that is, in deciding when data sources are to be administered: Pre, Interim, or Post project operation. Finally, the "How" informs the evaluator of the Data Sources the project staff have in place (or expect to have in place) that will measure the extent to which the objective was achieved.

Many project descriptions include objectives, and sponsors usually require elaborate management plans that spell out the objectives to be accomplished. However, that is not always the reality. Often, objectives exist only as statements of what a project will perform, such as to conduct five workshops, to train four hundred machinists, or to counsel three hundred couples on family planning. Such objectives include the Who, the What, and sometimes the When, but usually do not include the How. Project directors usually expect that this is the evaluator's "job": to identify or develop appropriate measures where none exist.
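The four components can be thought of as fields of a simple record. As a minimal sketch, the structure below represents one objective in Python; the field names and the sample objective are illustrative only, not a standard drawn from the chapter.

```python
from dataclasses import dataclass

# A sketch of the four objective components: Who, What, When, and How.
# Field names and the sample values are hypothetical illustrations.
@dataclass
class Objective:
    who: str      # Client: who is to be affected
    what: str     # Activity: what will happen
    when: str     # Time line: when results are anticipated
    measure: str  # How one will know it was accomplished (data source)

obj = Objective(
    who="400 machinists",
    what="complete machining skills training",
    when="by the end of the first project year",
    measure="posttest score of 80 percent or higher",
)
```

Stating an objective in this form makes it immediately clear which component is missing; in the common case described above, the `measure` field would be empty until the evaluator supplies one.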
Regardless of whether the objectives contain all components or only some, it is primarily the evaluator's responsibility to recommend how data will be analyzed so that stakeholders understand the extent to which objectives have been achieved. A trained evaluator has at his or her disposal a plethora of analyses that can transform data collected across many clients, staff, processes, or occurrences into a form that an untrained stakeholder can understand.
This chapter will address a number of the more common statistical analyses used in an evaluation. The specific analyses selected will be determined by many factors, including the number of participants, the level of data, the type of measure (Qualitative or Quantitative), the type of design (pretest, posttest, or both), and the evaluation model. The evaluator, working with the project staff, will determine the most appropriate method of data analysis for each objective, given these factors (and others). Let's begin by discussing data.
The term data usually conjures the image of numerical figures presented in a table or graph after having been manipulated using some complex formula. Although that image is in fact accurate, data can also take other forms. Responses to interview questions (narrative), survey items (scales), and observations (narrative or counts) are all considered data. As with numbers, once these data are collected they need to be analyzed systematically and objectively in order to yield meaningful information for the program evaluation. The data can be analyzed using a variety of data analysis (statistical) procedures.
In a face-to-face interview, for example, the evaluator is the data collector, using an interview schedule (instrument). The evaluator asks a question, immediately analyzes the response, and if necessary redirects the interviewee to another topic or asks for a fuller explanation. Careful notes are kept, from which information pertinent to the evaluation questions is culled. This is a process called "coding" (Babbie, 1989). There are now computer programs that assist with this coding in qualitative evaluation.
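At its simplest, coding means assigning analytic categories to narrative responses and tallying how often each category occurs. The sketch below illustrates the idea with a hypothetical keyword-based codebook and made-up responses; real qualitative coding is far more nuanced, and software packages for it are more sophisticated than this.

```python
from collections import Counter

# Hypothetical coding scheme: keywords in a response mapped to
# analytic categories. Both keywords and categories are illustrative.
CODEBOOK = {
    "instructor": "staff_quality",
    "schedule": "logistics",
    "materials": "resources",
    "cost": "logistics",
}

def code_response(response: str) -> list:
    """Return every category code whose keyword appears in the response."""
    text = response.lower()
    return sorted({code for keyword, code in CODEBOOK.items() if keyword in text})

# Made-up interview responses for illustration.
responses = [
    "The instructor was well prepared, but the schedule kept changing.",
    "Materials arrived late and the cost was higher than promised.",
]

# Frequency of each category across all responses.
tally = Counter(code for r in responses for code in code_response(r))
```

The resulting frequencies give the evaluator a systematic summary of themes across interviews, which can then be reported alongside the quantitative findings.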
Observations can take place unobtrusively, as the evaluator gathers information, for example, about the setting of a program or about the interactions among people (Wholey, Hatry, and Newcomer, 1994; 2004). Information can be recorded in two ways. The evaluator can record careful, detailed notes on the observations, which will later be coded. Alternatively, if there are specific items or attributes that the evaluator needs to track in order to answer the evaluation question, a list or guide might be developed beforehand to be checked off during the observations. This becomes, in effect, a rating scale.
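A checklist of this kind turns observations into counts that can be summarized directly. As a sketch, assuming a hypothetical three-item checklist and two observed sessions, the percentage of sessions in which each item was observed can be computed like this:

```python
# Hypothetical observation checklist; items and session data are illustrative.
CHECKLIST = ["greets participants", "states objectives", "checks understanding"]

# Each observed session records which checklist items were seen.
sessions = [
    {"greets participants": True, "states objectives": True, "checks understanding": False},
    {"greets participants": True, "states objectives": False, "checks understanding": False},
]

def item_rates(sessions):
    """Percentage of sessions in which each checklist item was observed."""
    n = len(sessions)
    return {item: 100 * sum(s[item] for s in sessions) / n for item in CHECKLIST}

rates = item_rates(sessions)
```

Summaries such as "the objective-setting behavior was observed in 50 percent of sessions" are exactly the kind of systematic, objective statement the checklist is designed to support.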
In other programs observations may be conducted obtrusively, as described in Chapter Six; the evaluator may even become one of the participants.
The prospect of having to do statistical work can produce sweaty palms for many of us. But here is the good news: you no longer need to know how to crunch numbers to produce valuable statistics. There are simple computer programs that can perform that task, easily and affordably. (See the Software and Further Reading lists at the end of the chapter.) One need only understand a few basic concepts in order to interpret the analyzed data.
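As one illustration of how little hand computation is required today, the sketch below uses Python's standard `statistics` module to produce the most common descriptive statistics. The scores are fabricated for illustration; any of the software packages listed at the end of the chapter will produce the same figures from a menu.

```python
import statistics

# Hypothetical posttest scores for ten workshop participants (illustrative data).
scores = [72, 85, 90, 66, 78, 88, 95, 70, 84, 91]

mean = statistics.mean(scores)      # arithmetic average of the scores
median = statistics.median(scores)  # middle value when scores are ordered
spread = statistics.stdev(scores)   # sample standard deviation
```

Interpreting these three numbers, what a typical score is, where the middle of the distribution lies, and how widely scores vary, is among the basic concepts this chapter covers.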