The Classic Two-Group Pretest-Posttest Design with Random Assignment
Figure 4.1a. The classic design: Two-group pretest-posttest.

Figure 4.1a shows the classic experimental design: the two-group pretest-posttest with random assignment. From a population of potential participants, some participants have been assigned randomly to a treatment group and a control group. Read across the top row of the figure. An observation (measurement) of some dependent variable or variables is made at time 1 on the members of group 1. That is O1. Then an intervention is made (the group is exposed to some treatment, X). Then another observation is made at time 2. That is O2.
Now look at the second row of figure 4.1a. A second group of people is observed, also at time 1. Measurements are made of the same dependent variable(s) as for the first group. This observation is labeled O3. There is no X on this row, which means that no intervention is made on this group of people. They remain unexposed to the treatment or intervention in the experiment. Later, at time 2, after the first group has been exposed to the intervention, the second group is observed again. That's O4.
Random assignment of participants ensures equivalent groups, and the second group, without the intervention, ensures that several threats to internal validity are taken care of. Most importantly, you can tell how often (how many times out of a hundred, for example) any differences between the pretest and posttest scores for the first group might have occurred anyway, even if the intervention hadn’t taken place.
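The logic in the last sentence is what a permutation test makes concrete: shuffle participants between the two groups many times and count how often a difference as large as the observed one shows up by chance. Here is a minimal sketch in Python, using invented gain scores (posttest minus pretest) purely for illustration; these numbers are assumptions, not data from any study discussed here.

```python
import random

random.seed(0)  # for a reproducible illustration

# Hypothetical gain scores (posttest minus pretest); invented for this sketch
treatment_gains = [12, 15, 9, 20, 14, 11, 18, 10]
control_gains = [3, -2, 5, 1, 0, 4, 2, -1]

# Observed difference in mean gains between the two groups
observed = (sum(treatment_gains) / len(treatment_gains)
            - sum(control_gains) / len(control_gains))

pooled = treatment_gains + control_gains
n_treat = len(treatment_gains)
n_iter = 10_000
hits = 0
for _ in range(n_iter):
    # Reassign participants to groups at random, as if the treatment had no effect
    random.shuffle(pooled)
    diff = (sum(pooled[:n_treat]) / n_treat
            - sum(pooled[n_treat:]) / (len(pooled) - n_treat))
    if diff >= observed:
        hits += 1

# Proportion of random reassignments producing a difference at least as large
# as the one observed: "how many times out of ten thousand" it happens by chance
p_value = hits / n_iter
```

A small p_value here means that random assignment alone is very unlikely to have produced a difference this large, which is exactly the reassurance the control group is there to provide.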
Patricia Chapman (Chapman et al. 1997) wanted to educate young female athletes about sports nutrition. She and her colleagues worked with an eight-team girls' high school softball league in California. There were nine 14- to 18-year-olds on each team, and Chapman et al. assigned each of the 72 players randomly to one of two groups. In the treatment group, the girls got two 45-minute lectures a week for 6 weeks about dehydration, weight loss, vitamin and mineral supplements, energy sources, and so on. The control group got no instruction.
Before the program started, Chapman et al. asked each participant to complete the Nutrition Knowledge and Attitude Questionnaire (Werblow et al. 1978) and to list the foods they’d eaten in the previous 24 hours. The nutrition knowledge-attitude test and the 24-hour dietary recall test were the pretests in this experiment. Six weeks later, when the program was over, Chapman et al. gave the participants the same two tests. These were the posttests. By comparing the data from the pretests and the posttests, Chapman et al. hoped to test whether the nutrition education program had made a difference.
The education intervention did make a difference—in knowledge, but not in reported behavior. Both groups scored about the same in the pretest on knowledge and attitudes about nutrition, but the girls who went through the lecture series scored about 18 points more (out of 200 possible points) in the posttest than did those in the control group.
However, the program had no effect on what the girls reported eating. After 6 weeks of lectures, the girls in the treatment group reported consuming 1,892 calories in the previous 24 hours, while the girls in the control group reported 1,793 calories. The difference was a statistical dead heat. Neither total is nearly enough for young female athletes, and the results confirmed for Chapman what other studies had already shown—that for many adolescent females, the attraction of competitive sports is the possibility of losing weight.
This classic experimental design is used widely to evaluate educational programs. Kunovich and Rashid (1992) used this design to test their program for training freshman dental students in how to handle a mirror in a patient’s mouth (think about it; it’s not easy—everything you see is backward) (Further Reading: the classic experimental design and evaluation research).