Phase II—Initial Comparison With a Control Group

Phase II involves an initial pilot test, or series of pilot tests, of the intervention, typically conducted in comparison with an appropriate alternative. In this phase, a small pilot randomized trial (e.g., a sample of 20-60 participants) can be used to identify or refine appropriate outcomes and their measurement (see Chapters 14 and 15), evaluate whether measures are sensitive to the changes expected from an intervention, determine the type of control group (see Chapter 8), and evaluate potential treatment effects.

Also in Phase II, monitoring of feasibility, acceptability, and safety may continue, along with continued evaluation of whether and how the theoretical base explains observed changes. Another important task that can begin in Phase I or II is developing ways to evaluate treatment fidelity (see Chapter 12). Specifically, in these early phases it is helpful to begin to think through a monitoring plan and identify measures that capture the extent to which the study conditions (e.g., treatment and control groups) are implemented as intended. Thus, pilot and feasibility studies in this phase can be used to evaluate a wide range of aspects of a larger study, such as the feasibility of all procedures and design elements, including recruitment, retention, and assessments or outcome measures, in addition to evaluating intervention components, dosing, and other delivery characteristics. This phase can yield preliminary evidence that the intervention has its desired effects; a clearer understanding of the theoretical framework(s) informing the intervention; information about appropriate control groups; a well-defined treatment manual specifying delivery characteristics; and the most appropriate outcome measures (see Chapters 14 and 15). Finally, it can inform design considerations for a more definitive Phase III efficacy trial.

Although there is no doubt that conducting pilot and feasibility studies in both Phases I and II is critical for mapping larger scale studies of behavioral interventions, their methodological rigor and yield have come under increasing scrutiny. For example, whereas pilot studies were previously often used to determine effect

TABLE 2.1 Basic Study Designs for Use Across the Pipeline



Adaptive designs

Adaptive interventions, also known as "adaptive treatment strategies" or "dynamic treatment regimens," involve individually tailored treatments based on participants' characteristics or clinical presentations, and are adjusted over time in response to persons' needs. The approach reflects clinical practice in that dosages vary in response to participant needs. These designs use prespecified decision rules based on a set of key characteristics (referred to as "tailoring variables") (Collins et al., 2004; Lei et al., 2012; The Methodology Center, Penn State University, 2012).



Case-control designs

Case-control designs are typically retrospective. Individuals with an outcome of interest (e.g., some disease or condition) are selected and compared with individuals (matched on relevant characteristics) without the outcome or disease in an attempt to find the risk factors present in the cases but not in the controls. The goal is to understand the relative importance of a predictor variable in relation to the outcome (Kazdin, 1994; Mann, 2003).


Cohort (longitudinal) designs

These designs are often used to determine the incidence and natural history of a disease or condition. A cohort is a group of people who have something in common (e.g., an age group, exposure to some environmental condition, or receipt of a particular treatment) and who remain part of that group over an extended period of time. Cohort designs may be prospective or retrospective. In prospective designs, individuals are followed for a certain period of time to determine whether they develop an outcome of interest, and the investigator measures variables that might be related to that outcome. For example, a cohort of individuals of the same age might be followed longitudinally to determine whether they develop Alzheimer's disease. In retrospective designs, the cohort is examined retrospectively; the data already exist (Dawson & Trapp, 2004; Mann, 2003).

Cross-over designs

This design includes two groups—treatment and control. Initially, one group of individuals is assigned to the treatment group, and another group is assigned to the control (typically with random assignment). After a period of time, both groups of individuals are withdrawn from their original group for what is referred to as a "washout period," in which no treatments are administered. Following the "washout period," individuals initially assigned to the control group receive the treatment, and those who originally received the treatment are assigned to the control condition (Dawson & Trapp, 2004).
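As an illustration (not part of the original text), the two-period assignment logic just described can be sketched in Python; the function name and period labels are hypothetical, and a real trial would use a validated randomization procedure.

```python
import random

def crossover_schedule(participants, seed=0):
    """Assign participants to a hypothetical AB/BA crossover: half receive
    treatment then control, half receive control then treatment, with a
    washout period (no treatment) separating the two phases."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)  # random assignment to sequence
    half = len(shuffled) // 2
    schedule = {}
    for p in shuffled[:half]:
        schedule[p] = ["treatment", "washout", "control"]
    for p in shuffled[half:]:
        schedule[p] = ["control", "washout", "treatment"]
    return schedule

sched = crossover_schedule(["p1", "p2", "p3", "p4"])
```

Each participant thus serves in both conditions, with the washout period intended to limit carryover from the first phase into the second.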



Cross-sectional designs

These designs are primarily used to understand the prevalence of an outcome (e.g., a disease or condition). A group of individuals is selected at one point in time rather than over a time period, and data on these individuals relevant to a particular outcome are analyzed. All measurements on an individual are made at one point in time to determine whether he or she has the outcome of interest (Dawson & Trapp, 2004; Mann, 2003).




Factorial designs

Factorial designs allow investigation of the impact of more than one independent variable on an outcome measure(s) of interest. The independent variables are examined at different levels. In a 2 x 2 design, for example, there are two independent variables (e.g., intervention A and intervention B), each at two levels (e.g., dosage: high vs. low). In this case there are four groups, representing each possible combination of the levels of the two factors. These designs allow for the assessment of the main effect of each variable and the interactions among the variables (Kazdin, 1994).
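To make the cell structure concrete (an illustration added here, not from the original text), the four conditions of a 2 x 2 design can be enumerated as the Cartesian product of the factor levels; the factor names below are hypothetical.

```python
from itertools import product

# Hypothetical 2 x 2 factorial: two factors, each at two levels,
# crossed to yield every combination (four cells).
factors = {"intervention": ["A", "B"], "dosage": ["high", "low"]}
cells = [dict(zip(factors, combo)) for combo in product(*factors.values())]
# Each cell (e.g., {"intervention": "A", "dosage": "high"}) is one
# experimental condition to which participants would be assigned.
```

A 2 x 3 design would simply add a third level to one factor, yielding six cells; the main effects and interactions are then estimated across these cells.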

Hybrid designs

These study designs combine specific questions related to effectiveness and implementation, reflecting a dual testing approach determined a priori to implementing the study. Hybrid designs typically take one of three approaches: (a) testing the effects of an intervention on outcomes while gathering information on implementation; (b) testing clinical and implementation interventions/strategies simultaneously; or (c) testing an implementation strategy while also evaluating its impact on relevant outcomes (Bernet et al., 2013; Cully et al., 2012; Curran et al., 2012).


Meta-analysis

Meta-analysis is a quantitative synthesis of information from previous research studies to derive conclusions about a particular topic by summarizing findings across a large number of studies. For example, several meta-analyses have been conducted of the caregiver intervention literature. By combining relevant evidence from many studies, statistical power is increased, and more precise estimates of treatment effects may be obtained (Trikalinos, Salanti, Zintzaras, & Ioannidis, 2008).
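The pooling idea can be illustrated with a minimal fixed-effect, inverse-variance sketch (added here for illustration; the function name and the numbers in the comment are hypothetical, and real meta-analyses typically also consider random-effects models and heterogeneity).

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: each study's effect
    estimate is weighted by 1/variance, so larger, more precise
    studies contribute more to the combined estimate."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    var = 1.0 / sum(weights)  # pooled variance shrinks as studies accumulate
    return est, var

# Two hypothetical studies with equal precision: the pooled estimate
# is their average, and its variance is half each study's variance.
est, var = pooled_effect([0.2, 0.4], [0.01, 0.01])
```

The shrinking pooled variance is exactly the gain in statistical power and precision that the paragraph above describes.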

Pretest-posttest control group designs

Pretest-posttest control group designs are commonly used in intervention research, especially at the efficacy stage of the pipeline. This design consists of a minimum of two groups where participants are evaluated on outcome measures before and after the intervention. Thus, the impact of the intervention is reflected in the amount of change from pre- to postintervention assessment. Individuals are typically randomly assigned to groups (Kazdin, 1994).
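The pre-to-post change logic can be sketched in a few lines (an illustration added here, not from the original text; function names and data are hypothetical, and a real analysis would use inferential statistics rather than raw mean differences).

```python
def change_scores(pre, post):
    """Per-participant change from pretest to posttest."""
    return {p: post[p] - pre[p] for p in pre}

def mean_change_difference(pre, post, groups):
    """Difference in mean change between treatment and control:
    the basic effect estimate in a pretest-posttest control design."""
    changes = change_scores(pre, post)
    def mean(arm):
        vals = [c for p, c in changes.items() if groups[p] == arm]
        return sum(vals) / len(vals)
    return mean("treatment") - mean("control")
```

For example, if treated participants improve by 3 points on average while controls improve by 1, the estimated intervention effect is a 2-point change.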

Randomized controlled trial designs

Randomized controlled trials (RCTs) are considered the "gold standard" for evaluating the efficacy or effectiveness of an intervention. In an RCT, after recruitment, screening, and baseline assessment, participants are randomly assigned to a condition (e.g., alternative interventions/treatments, or an intervention/treatment and a control). Following randomization, the groups are treated and followed in the same way (e.g., with the same assessment protocols); the only difference is the treatment/intervention they receive. Typically, a primary end point or outcome measure is identified prior to the beginning of the trial, and the trial is registered in a clinical trials registry (Concato, Shah, & Horwitz, 2000).




Randomized block designs

The randomized block design, similar to stratified sampling, is used to reduce variance in the data. Using this design, the sample is divided into homogeneous subgroups, or blocks (e.g., by gender), and then the individuals within each block are randomly assigned to a treatment/intervention condition or to treatment/intervention and control conditions (Bailey, 2004).
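The block-then-randomize procedure can be sketched as follows (an illustration added here, not from the original text; the function name and `block_key` helper are hypothetical, and production trials use validated randomization software).

```python
import random

def block_randomize(participants, block_key, arms=("treatment", "control"), seed=0):
    """Stratify participants into homogeneous blocks (e.g., by gender),
    then randomize within each block so that arms stay balanced on the
    blocking variable."""
    rng = random.Random(seed)
    blocks = {}
    for p in participants:
        blocks.setdefault(block_key(p), []).append(p)  # group into blocks
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)                 # random order within the block
        for i, p in enumerate(members):
            assignment[p] = arms[i % len(arms)]  # alternate arms -> balance
    return assignment
```

Because assignment alternates within each shuffled block, each block contributes (near-)equal numbers to every arm, which is the variance-reduction benefit the design offers over simple randomization.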

Single-case designs

In a single-case design, an individual serves as his or her own control. The individual is assessed repeatedly for a period of time before the treatment is administered (the "baseline phase"), which allows the investigator to examine the stability of performance on some outcome. The treatment/intervention is then administered, and performance on the outcome is assessed during and after the treatment/intervention (Kazdin, 1994).

SMART designs

A Sequential Multiple Assignment Randomized Trial (SMART) is an approach to inform the development of an adaptive intervention. A SMART enables systematic evaluation of the timing, sequencing, and adaptive selection of treatments using randomized data. Participants move through multiple stages of treatment, with each stage reflecting a documented decision or set of decision rules, and are randomized at each stage in which a treatment decision is made. This allows the investigator to make causal inferences concerning the effectiveness of various intervention options (Almirall et al., 2012; Lei et al., 2012; Murphy, 2005).
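One decision point of such a trial can be sketched as follows (an illustration added here, not from the original text; the function, the `responder` rule, and the arm labels "augment"/"switch" are all hypothetical examples of a tailoring variable and second-stage options).

```python
import random

def smart_stage(participants, responder, stage1_arm, seed=0):
    """One decision point in a hypothetical two-stage SMART:
    responders to the stage-1 treatment continue it, while
    non-responders are re-randomized between two stage-2 options."""
    rng = random.Random(seed)
    stage2 = {}
    for p in participants:
        if responder(p):
            stage2[p] = stage1_arm                          # continue treatment
        else:
            stage2[p] = rng.choice(["augment", "switch"])   # re-randomize
    return stage2
```

Repeating this randomize-observe-decide cycle at each stage is what lets a SMART support causal comparisons among whole sequences of treatment decisions, not just single treatments.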

Wait-list control designs

Using this design, participants are randomly assigned to either the treatment/intervention group or the wait-list group, which receives the treatment at a later date. The wait-list group is used as a control group. Typically, pre-post intervention data are gathered from both groups (Hart, Fann, & Novack, 2008).

sizes for a larger trial, research has shown that such estimates may overstate treatment effects because of the imprecision of data from small samples. Furthermore, feasibility results may not generalize beyond the inclusion and exclusion criteria of a pilot (Arain, Campbell, Cooper, & Lancaster, 2010). There is also confusion in the literature as to what constitutes a "pilot" versus a "feasibility" study, and what methodologies are most appropriate for each (Leon, Davis, & Kraemer, 2011; Thabane et al., 2010).

No agreed-upon guidelines exist for pilot or feasibility studies, or for whether and how the two should be distinguished. Arain and colleagues (2010) suggest that feasibility studies are typically conducted with more flexible methodologies and that their results may not be generalizable beyond the sample inclusion criteria. Pilot studies, by contrast, tend to incorporate more rigorous design elements and should be viewed as a necessary step prior to a larger scale test of an intervention. Regardless of this conceptual confusion, at the very least, investigators should clearly state how feasibility will be defined and operationalized in feasibility studies, and the specific purpose(s) of a pilot should be clearly articulated. There is also no doubt that feasibility and pilot tests are necessary endeavors prior to moving forward with larger scale evaluations of behavioral interventions.
