The Importance of Theory
In their broad review of prevention efforts in psychology, Nation and colleagues (2003) argued that being theory-driven is one of nine core characteristics of effective programs, a perspective also endorsed by the American Psychological Association (2014). A theoretical model serves several functions. It identifies the behaviors and attitudes that we are trying to modify or prevent. It also specifies the influences that our programs must target in order to prevent the outcome. Importantly, the theory delineates the causal relationships among the variables, and these proposed relationships allow us to identify clear, testable hypotheses. This feature of a good theory is important for several reasons.
First, testing specific hypotheses helps to keep the research focused. We analyze and evaluate hypotheses rather than engaging in exploratory data analyses with ambiguous implications (Munafò et al., 2017). While exploratory data analysis certainly has its place, prevention scientists are primarily engaged in evaluating a specific program that is hypothesized to work in a specific way (Gottfredson et al., 2015). We need to avoid data dredging (also known as p-hacking), which can increase the likelihood of spurious results. Minimizing p-hacking increases the chances that the findings can be reproduced by other researchers, which is critical in validating and eventually disseminating the program.
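To make the cost of data dredging concrete, the brief simulation below (a hypothetical illustration, not drawn from the source) shows how scanning many unplanned outcome measures inflates the chance of finding a spurious "significant" program effect. The sample sizes, outcome counts, and variable names are assumptions chosen purely for demonstration.

```python
# Hypothetical simulation: both groups are drawn from the same distribution,
# so any "significant" difference found below is spurious by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, n_outcomes, n_sims, alpha = 50, 20, 1000, 0.05

spurious = 0
for _ in range(n_sims):
    program = rng.normal(0.0, 1.0, (n_per_group, n_outcomes))
    control = rng.normal(0.0, 1.0, (n_per_group, n_outcomes))
    # Dredging: test all 20 outcomes and take whichever comes out "significant"
    p_values = stats.ttest_ind(program, control, axis=0).pvalue
    if (p_values < alpha).any():
        spurious += 1

# One pre-specified outcome holds the false-positive rate near alpha (5%);
# scanning 20 outcomes raises it to roughly 1 - 0.95**20, about 64%.
print(f"Studies with at least one spurious 'effect': {spurious / n_sims:.0%}")
```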
Identifying the theoretical causal relationships is important to dissemination efforts for another reason. Let’s say a program focuses on changing the interpretation of thin ideal media messages in order to decrease body image concerns. What if a particular population has minimal exposure to such messages, as may be true for some groups of boys? Knowing the causal relationships that are interrupted by a particular program can help consumers select programs that are particularly appropriate for their audiences (Cooper et al., 2015; Stice, Becker, & Yokum, 2013). It can also be a focus of the content modification necessary to fit a particular sample or population (Becker, 2017). For example, program content could be modified to emphasize messages about muscular ideals rather than exclusively thin ideals. This would make the program more generalizable. Theory helps us to consider the important issues of who might ultimately be served by the program. Such considerations are crucial to effectively disseminate programs (Spoth et al., 2013).
Few prevention programs have taken the step of testing the components of the underlying theory. Stice’s extensive research with the Body Project (Chapter 7) provides a template for this process. Stice and his colleagues have demonstrated that at least three of the mediating variables (thin ideal internalization, denial of costs of the pursuit of the thin ideal, level of dissonance induction) proposed in their model are affected by the Body Project program and, in turn, affect the outcome variable (Stice, Becker, & Yokum, 2013). Stice, Yokum, and Waters (2015) also demonstrated that, as predicted, the responsiveness of specific brain areas (e.g., the caudate) involved in reward valuation is reduced by participating in the program (Chapter 10). This type of research is crucial for the future development of effective programs and dissemination.
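As a concrete illustration of this kind of mediation test, the sketch below simulates one hypothesized pathway (program to thin-ideal internalization to body image concerns) and bootstraps the indirect effect. It is a minimal sketch under assumed data and effect sizes, not the analytic code used in the Body Project studies.

```python
# Hypothetical mediation sketch; simulated data and effects are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
program = rng.integers(0, 2, n)                           # 0 = control, 1 = program
# Simulate the hypothesized causal chain:
internalization = -0.5 * program + rng.normal(0, 1, n)    # a path
concerns = 0.6 * internalization + rng.normal(0, 1, n)    # b path

def indirect_effect(x, m, y):
    """Product-of-coefficients (a*b) indirect effect from two OLS fits."""
    a = np.polyfit(x, m, 1)[0]                            # slope of m on x
    # b: slope of y on m, controlling for x (two-predictor regression)
    X = np.column_stack([np.ones_like(x, dtype=float), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(program[idx], internalization[idx], concerns[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])

# A confidence interval excluding 0 is consistent with mediation.
print(f"indirect effect = {indirect_effect(program, internalization, concerns):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```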
Theoretical models should also identify moderator variables by, for example, specifying whether there are specific groups to whom the model applies. These groups may be defined by age, gender, risk status, ethnicity, or other characteristics. Moderators may also indicate particular settings (e.g., school or athletic team practice) or times (e.g., prior to lunch) that might affect the program’s influence. Identifying moderators during the planning stages of the project will ultimately improve dissemination efforts later in the process (Gottfredson et al., 2015; Klesges, Estabrooks, Dzewaltowski, Bull, & Glasgow, 2005).
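A common way to probe such a moderator statistically is to test an interaction term in a regression model. The sketch below is a hedged illustration with simulated data; the moderator, effect sizes, and variable names are assumptions, not values from any actual trial.

```python
# Hypothetical moderation sketch: does the program effect differ by group?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
program = rng.integers(0, 2, n)            # randomized condition
moderator = rng.integers(0, 2, n)          # e.g., 0 = boys, 1 = girls (assumed)
# Simulate a program effect that is stronger when moderator == 1
outcome = -0.2 * program - 0.5 * program * moderator + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([program, moderator, program * moderator]))
model = sm.OLS(outcome, X).fit()
# A significant coefficient on the interaction term indicates that the
# program's effect depends on the moderator.
print(model.summary(xname=["const", "program", "moderator", "program_x_moderator"]))
```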
Data Analysis
The specific analytic techniques used to evaluate a program’s proposed effects in a particular study depend on at least three factors. First, the analysis must be guided by the hypotheses derived from the theoretical model. Second, there is the issue of whether the data are qualitative or quantitative. Finally, missing data will often present a challenge. We have already discussed the first issue. Using specific hypotheses to determine data analyses helps to avoid spurious results and an incorrect conclusion that the program works, that is, a type I error (Munafò et al., 2017).
Most ED prevention research uses inferential statistics and an experimental or quasi-experimental design. These designs will be discussed in more detail later in the consideration of efficacy and effectiveness trials. There are situations in which qualitative approaches may be particularly useful. For example, program designers might pilot-test content with teachers, parents, or students and then ask participants to comment on the specifics of the content, rather than just having them complete a rating scale. If the analysis of the narratives concerning content is intended to be exploratory and is not guided by hypotheses, then a grounded theory approach might be used (Charmaz, 2014). If there are hypotheses, then the data analysis might, for example, be quasi-statistical or a content analysis. Analyzing the appropriateness of program content is valuable not only in the initial development of the program but also in preparing to implement the program with new populations. Evaluating and modifying content for new groups of stakeholders is an important step in disseminating a program (Becker, 2017; Chapters 7 and 19).
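For the hypothesis-driven, quasi-statistical case, a minimal sketch of coding pilot feedback against a predefined frame might look like the following. The comments, categories, and keyword lists are hypothetical placeholders, standing in for a validated coding scheme.

```python
# Hypothetical quasi-statistical content analysis of pilot feedback.
from collections import Counter

comments = [
    "The session on media messages felt too long for one class period.",
    "Students liked the group discussion but wanted more examples.",
    "The examples about media did not fit our athletes.",
]

# Hypothesis-driven coding frame: each category maps to indicator keywords.
coding_frame = {
    "length_concerns": ["too long", "rushed", "not enough time"],
    "relevance": ["fit", "relevant", "our athletes", "our students"],
    "engagement": ["liked", "enjoyed", "discussion"],
}

counts = Counter()
for comment in comments:
    text = comment.lower()
    for category, keywords in coding_frame.items():
        if any(kw in text for kw in keywords):
            counts[category] += 1

# Frequencies per category can then be compared against the hypotheses.
for category, k in counts.items():
    print(f"{category}: {k}/{len(comments)} comments")
```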
Missing Data
Randomness
In any study with multiple times of measurement, as will be the case in efficacy and effectiveness trials, there may be sample attrition, resulting in missing data. Before selecting a method for dealing with missing data, the researcher must first evaluate whether the data are missing at random. If the missing scores are not systematically related to any variables, then they are missing completely at random (MCAR). This is the most desirable state of affairs. Data are considered missing at random (MAR) if the missingness is related to other, observed variables in the study but not to the unobserved values themselves. Data are not missing at random (NMAR) if the missingness is directly related to the unobserved values. For example, if people with higher body weights for height have tended to leave body weight “blank,” then the missing weight scores are disproportionately related to high weight, so the data are NMAR. On the other hand, if missing weight values are related only to ethnicity, the data are considered MAR.
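The small simulation below illustrates these three mechanisms using the body-weight example. The sample, covariate, and missingness rates are assumptions chosen for demonstration, not values from any real data set.

```python
# Hypothetical simulation of MCAR, MAR, and NMAR missingness mechanisms.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "ethnicity": rng.integers(0, 2, n),   # observed covariate
    "weight": rng.normal(70, 12, n),      # true weight in kg
})

# MCAR: every score has the same 20% chance of being missing.
mcar = df["weight"].mask(rng.random(n) < 0.20)

# MAR: missingness depends on an OBSERVED variable (ethnicity), not on weight.
mar = df["weight"].mask((df["ethnicity"] == 1) & (rng.random(n) < 0.40))

# NMAR: heavier participants are more likely to leave weight blank, so
# missingness depends on the unobserved value itself.
nmar = df["weight"].mask((df["weight"] > 80) & (rng.random(n) < 0.60))

for name, col in [("MCAR", mcar), ("MAR", mar), ("NMAR", nmar)]:
    # Under NMAR the observed mean is biased low; MCAR/MAR stay near 70.
    print(f"{name}: observed mean = {col.mean():.1f}, missing = {col.isna().mean():.0%}")
```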
Little’s (1988) test can be used to evaluate whether data are MCAR. Nevertheless, many researchers will attempt to uncover patterns in attrition by comparing the pretest scores of participants who dropped out with those of participants who completed follow-up. These tests are not definitive, but they can provide valuable information. If it can be established that the data are MCAR or MAR and less than about 5% of the data are missing, researchers could use listwise deletion of participants with missing data, an approach known as complete case analysis (Jakobsen & Gluud, 2017). This, of course, reduces the sample size and therefore power. It may also introduce undiagnosed systematic error into the data set. Alternatively, researchers could use imputation techniques to replace the missing data (Jakobsen & Gluud, 2017). But among ED prevention researchers, the preferred solution to missing data is the intent(ion)-to-treat approach (Gupta, 2011; Ten Have et al., 2008).
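The sketch below illustrates the two simpler options, listwise deletion and imputation, on simulated pretest–posttest data with roughly 5% missing values. A mean fill stands in here for brevity; multiple imputation is generally preferable in practice (Jakobsen & Gluud, 2017). All names and values are illustrative assumptions.

```python
# Hypothetical comparison of complete case analysis versus simple imputation.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "pretest": rng.normal(3.0, 0.5, 200),
    "posttest": rng.normal(2.5, 0.5, 200),
})
df.loc[rng.random(200) < 0.05, "posttest"] = np.nan   # roughly 5% missing

# Option 1: listwise deletion (complete case analysis). Defensible under
# MCAR/MAR with little missing data, but it shrinks the sample and power.
complete_cases = df.dropna()

# Option 2: imputation. A mean fill is shown; multiple imputation is the
# stronger choice in real analyses.
imputed = pd.DataFrame(
    SimpleImputer(strategy="mean").fit_transform(df), columns=df.columns
)

print(f"n after deletion: {len(complete_cases)}; n after imputation: {len(imputed)}")
```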
Intent to Treat
In this technique, every research participant is included in the analyses, even though some participants did not follow the full protocol. For example, they may have attended only 10 of the 12 program sessions, or they may have withdrawn from the program altogether. The intent-to-treat approach deliberately ignores any deviation that occurs after participants are randomized to experimental conditions. Such an approach acknowledges that research is an imperfect enterprise. This “real-life” perspective on the implementation of programs is perhaps the primary advantage of the intent-to-treat approach to data analysis (Gupta, 2011). It is worth noting, however, that this approach does not guarantee generalizability of the results to the population. It is possible that a different sample from the same population would show different patterns of noncompliance or withdrawal from the assigned protocol (Ten Have et al., 2008).
The primary disadvantage of an intent-to-treat analysis is that it includes people who did not follow the protocol in the same group as those who did. This may weaken the observed treatment effects (Gupta, 2011) and hence may lead to an incorrect conclusion that the program does not work, that is, to a type II error. Consequently, rather than this “as assigned” approach to deciding which data to include, one could opt for an “as delivered or as received” approach (Ten Have et al., 2008, p. 772). For example, only the data of participants who remained in their assigned group throughout the treatment, participated in all sessions, and completed all measurements would be analyzed. These per-protocol approaches are typically used as a secondary analysis, following an intent-to-treat analysis. If the intent-to-treat and per-protocol analyses produce similar results, confidence in the findings about the impact of the program is increased. A variety of non-intent-to-treat approaches can be used as secondary analyses (Gupta, 2011; Ten Have et al., 2008).
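To make the contrast concrete, the sketch below runs an intent-to-treat analysis and a per-protocol secondary analysis on simulated trial data. The adherence rule (at least 10 of 12 sessions) and effect sizes are illustrative assumptions, not results from any real trial.

```python
# Hypothetical intent-to-treat versus per-protocol comparison.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "assigned": rng.integers(0, 2, n),     # 1 = program, 0 = control
    "sessions": rng.integers(0, 13, n),    # sessions attended (0-12)
})
# Simulate an effect that only accrues for participants who actually attend.
adherent = (df["assigned"] == 1) & (df["sessions"] >= 10)
df["outcome"] = rng.normal(0, 1, n) - 0.5 * adherent

def effect(data):
    prog = data.loc[data["assigned"] == 1, "outcome"]
    ctrl = data.loc[data["assigned"] == 0, "outcome"]
    return prog.mean() - ctrl.mean(), stats.ttest_ind(prog, ctrl).pvalue

# ITT: everyone analyzed as randomized, regardless of adherence.
itt_effect, itt_p = effect(df)
# Per-protocol: program participants restricted to adherent completers.
pp_effect, pp_p = effect(df[(df["assigned"] == 0) | adherent])

# The diluted ITT estimate versus the larger per-protocol estimate shows the
# type II risk described above; agreement between the two analyses would
# strengthen confidence in the program's effect.
print(f"ITT: {itt_effect:.2f} (p={itt_p:.3f}); "
      f"per-protocol: {pp_effect:.2f} (p={pp_p:.3f})")
```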