Establishing Causality

Correlation does not imply causation, yet this error continues to appear in the discussion sections of some published studies (Shadish, Cook, & Campbell, 2002). To illustrate, there is a strong association between the number of earthworms on a road in a certain county in Florida and the number of automobile accidents on that road. Suppose, hypothetically, that the strength of association is .6 (p < .001). Because squaring the correlation gives the proportion of shared variance (.6 × .6 = .36), the number of earthworms could account for 36% of the variability in automobile accidents on the road. Does this mean that the earthworms on the road caused the automobile accidents? Certainly not! It turns out that rain brings out the earthworms and also makes the roads slick. Thus, a third, unaccounted-for factor (rain) is the causal mechanism underlying the association between the other two variables. Confusing association with causation has resulted in a number of faulty scientific conclusions that were later rectified by further research. For example, it was once thought that aluminum caused Alzheimer’s disease (AD) because high levels of aluminum were found in the plaques in the brains of AD patients upon autopsy. However, aluminum was not the cause of the disease but a byproduct of other pathological processes that occurred in the brain after the disease started. Similarly, the beating of drums during a solar eclipse is reliably followed by the sun’s return, yet the association is spurious and certainly not causal.
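To make the earthworm example concrete, here is a minimal simulation sketch (all quantities are invented for illustration): rainfall drives both earthworm counts and accident counts, so the two outcomes correlate at roughly .6 (about 36% shared variance) even though neither causes the other, and statistically adjusting for rainfall removes the association.

```python
# Hypothetical illustration of a spurious correlation induced by a lurking variable.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
rain = rng.gamma(shape=2.0, scale=1.0, size=n)        # daily rainfall (arbitrary units)
worms = 5 * rain + rng.normal(0, 5, size=n)           # rain brings out the earthworms
accidents = 3 * rain + rng.normal(0, 4, size=n)       # rain also makes the roads slick

r = np.corrcoef(worms, accidents)[0, 1]
print(f"r = {r:.2f}, variance explained = {r**2:.2f}")  # roughly .6 and .36

# Adjust for rain by correlating the residuals of each variable after regressing on rain
worms_resid = worms - np.polyval(np.polyfit(rain, worms, 1), rain)
acc_resid = accidents - np.polyval(np.polyfit(rain, accidents, 1), rain)
print(f"partial r (controlling for rain) = {np.corrcoef(worms_resid, acc_resid)[0, 1]:.2f}")  # near zero
```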

In experimental designs that test a behavioral intervention, causality is assessed by randomly assigning participants to experimental and control groups (Imai, Tingley, & Yamamoto, 2013). The assumption is that random assignment will minimize group differences so that, if other variables are held constant or carefully controlled or accounted for, change over time between the groups can be attributed to the experimental condition to which participants were assigned. In pharmacological studies it is easy to use double-blind procedures in which neither the experimenter nor the participant can differentiate the active drug from the placebo. In behavioral intervention research, however, the experimenters conducting the intervention and the participants receiving it are often not blind to the condition to which participants are assigned. As a result, it is imperative that independent raters who are blind to condition obtain the baseline and outcome measures (referred to as a “single-blind trial”). Issues such as the expectancy effects of the participant and experimenter, or the disappointment one may feel when assigned to an “inactive group,” make it essential to design evaluations of behavioral interventions with control groups that can be equated to the intervention in terms of the interventionist’s time and attention (see Chapter 8 for a discussion of selecting control groups). Although this is not feasible in all types of studies, particularly early proof-of-concept tests of an intervention, the failure to include adequate control groups raises the question of whether the obtained results were due to the active ingredients of the intervention or merely to nonspecific aspects of the treatment that differed from the control condition.
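To illustrate the randomization step itself, the sketch below uses hypothetical participant IDs and a simple 1:1 random split into intervention and control groups. In practice, blocked or stratified randomization is often used to keep group sizes and key characteristics balanced, but the underlying principle is the same.

```python
# Hypothetical sketch of simple random assignment to intervention vs. control.
import random

participants = [f"P{i:03d}" for i in range(1, 41)]   # 40 hypothetical enrollees
random.seed(2024)                                     # fixed seed only so the example is reproducible

shuffled = random.sample(participants, k=len(participants))
half = len(shuffled) // 2
assignment = {pid: ("intervention" if i < half else "control")
              for i, pid in enumerate(shuffled)}

print(sum(v == "intervention" for v in assignment.values()), "assigned to intervention")
print(sum(v == "control" for v in assignment.values()), "assigned to control")
```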

 