Internal validity is the basic minimum without which any experiment is uninterpretable: Did in fact the experimental treatments make a difference in the experimental instance?
—Campbell and Stanley (1963)
As stressed throughout this book, a main goal of behavioral intervention research is to develop interventions that effectively address identifiable and documented public health issues, policy gaps, health disparities, or care approaches (Chapter 1). Behavioral intervention research is challenging, and the design, evaluation, and implementation of behavioral interventions require a systematic and well-documented approach. A systematic approach is required to establish confidence that an intervention has an impact on outcomes of interest. It also facilitates the ability of other researchers to replicate the intervention and, ultimately, the implementation of the intervention in community and clinical settings. Reports of intervention research trials should explain the methods used in the trial, including the design, delivery, recruitment processes, components of the intervention, and the context (see Chapter 24). Unfortunately, in many instances, behavioral intervention researchers fall short in the reporting of study design and methods, which limits use of research findings, diminishes optimal implementation of intervention programs, wastes resources, and fails to meet ethical obligations to research participants and consumers (Mayo-Wilson et al., 2013). Often this is due to the lack of detailed protocols for, and documentation of, intervention activities.
Generally, as discussed in Chapter 14, internal validity refers to the reliability/accuracy of the results of a research trial—the extent to which the changes in outcome measures or differences between treatment groups can be confidently attributed to the intervention as opposed to extraneous variables (Kazdin, 1994). Numerous factors can threaten internal validity, including historical events, maturation, repeated assessments, bias, and confounding factors. History, or external events experienced differentially by groups, can pose a threat to internal validity. For example, if an investigator is interested in evaluating a multisite workplace intervention to promote healthy eating and, during the course of the trial, some of the participating workplace sites change the food offerings at their cafeterias, it would be difficult at the end of the trial to attribute any changes in the eating habits of employees to the intervention. In this case, the event is out of the control of the investigator. However, the manner in which a trial is conducted is under the control of the investigator; as such, the investigator must track and document these types of events. Poorly conducted intervention trials increase the likelihood of confounding variables and biases, which in turn threaten the internal validity of the study.
In this chapter, we discuss issues related to the standardization of behavioral intervention research studies and address topics such as aspects of the trial that need to be standardized, tailoring, and strategies to enhance standardization such as manuals of operation (MOPs) and training of research team members. Our intent is to highlight the importance of taking steps to ensure that research activities at all stages of the pipeline are of the highest quality, so that potential benefits of the intervention are maximized and potential threats to internal validity are minimized. Developing strategies for standardizing research protocols and methodologies is important at all stages of the pipeline. This is the case for the beginning stages of developing an intervention, which might involve observational studies, as well as for later stages, which generally involve experimental designs (Chapter 2). Standardization is essential to maintaining treatment integrity, as discussed in Chapter 12. Finally, it is our experience that having detailed protocols and manuals for intervention activities (regardless of the intervention's stage of development) greatly facilitates the reporting of results and the development of manuscripts. In fact, most refereed journals require that intervention trials adhere to the CONSORT standards (Schulz, Altman, & Moher, 2010) for reporting randomized trials (see Chapter 24).