Phase IV—Effectiveness

In the traditional pipeline model, Phase IV is considered the final phase. Following a demonstration of efficacy in Phase III, Phase IV represents an effectiveness or replication trial that evaluates whether the intervention has an impact when delivered to a broader group of study participants than those included in the efficacy phase and/or within practice or service contexts different from those previously considered. Whereas Phase III methodological efforts are directed at ensuring internal validity, as already mentioned, the emphasis in Phase IV is on external validity, or the extent to which the intervention can have a broader reach and be generalized to more heterogeneous samples and environmental contexts. Although internal validity remains important, external validity is the primary focus. As such, inclusion and exclusion criteria may be relaxed, or opened up, to include a broader mix of study participants reflecting real clinical populations. Similarly, small adjustments to intervention protocols, such as the number or duration of sessions and/or who can deliver the intervention, may be made to meet the expectations of different targeted populations and settings.

Balancing the need to maintain fidelity (refer to Chapter 12) while accommodating an implementation context in an effectiveness phase can be challenging. If the intervention is changed too much, it may not yield the same level of benefits or type of outcomes achieved in the efficacy phase. However, if no adaptations are made, there is the risk that the intervention cannot be replicated in the designated setting (Washington et al., 2014). This is the essential challenge of this phase. Determining whether changes result in a new intervention that needs further testing is critical yet subjective; there are no common metrics or approaches that can be uniformly applied. Traditionally, this decision has rested with the originator of the intervention or the investigative team.

TABLE 2.2 Examples of Trial Designs

Comparative Effectiveness Research (CER): CER is a rigorous evaluation of the effects of different treatments, interventions, or programs. The approach provides a comparison of the benefits and harms of alternative treatments. Its purpose is to inform decision making as to which treatments to use at the individual and population levels (Conway & Clancy, 2009; Congressional Budget Office, 2007; Sox & Greenfield, 2009).

Effectiveness trial: A Phase IV trial concerned mostly with the external validity or generalizability of an intervention. In these trials, samples tend to be more heterogeneous than in efficacy trials to reflect real-world, clinical populations. Additionally, these trials usually include a broader range of outcomes, such as quality of life and cost. The essential question being tested is whether a treatment or intervention does more good than harm when delivered under real-world conditions (Curran et al., 2012; Flay, 1986; Glasgow et al., 2003).

Efficacy trial: A Phase III (explanatory) trial that determines whether an intervention has a desired outcome under ideal or optimum conditions. Such trials are characterized by their standardization and strong methodological control features (Flay, 1986; Gartlehner et al., 2006; Glasgow et al., 2003).

Embedded trial: Also referred to as a "practical trial," a trial in which the intervention is embedded in the setting or context in which it will be delivered in order to understand its effects in relation to other contextual factors that may or may not be manipulated (Allotey et al., 2008). This approach typically combines efficacy and effectiveness or effectiveness and implementation-type questions (Glasgow et al., 2005, 2007; Tunis et al., 2003).

Equivalence trial: Equivalence trials, also referred to as "noninferiority trials," seek to determine whether a new intervention is similar (or not) to another, usually an existing treatment or standard of care. The aim may be to show that the new intervention is equivalent to (or not inferior to) an established intervention or practice rather than better than that treatment (Christensen, 2007; Piaggio et al., 2006, 2012; Sedgwick, 2013).

Pragmatic trial: Pragmatic trials primarily measure the effectiveness or benefit of a new intervention in routine care or clinical practice. A pragmatic trial is similar to an embedded trial, described above, in that an intervention is rigorously tested in the context in which it will be delivered and is designed to inform decision making between a new and an existing treatment (Glasgow, 2013; Patsopoulos, 2011; Roland & Torgerson, 1998).

Superiority trial: A superiority trial is designed to show that a new intervention is statistically and clinically superior to an active control, an established therapy, or a placebo (Christensen, 2007; D'Agostino et al., 2003; Landow, 2000).

Case Example: An example of the need for this balancing act when moving from efficacy to effectiveness is the National Institutes of Health REACH II (Resources for Enhancing Alzheimer’s Caregiver Health) initiative. The REACH II intervention was tested in a Phase III efficacy trial involving five sites and 642 African American, Latino/Hispanic, and Caucasian caregivers of persons with dementia (Belle et al., 2006). The intervention that was tested involved 12 sessions (nine in-home and three telephone sessions); five structured telephone support group sessions were also provided. Participants received resource notebooks, educational materials, and telephones with display screens linked to a computer-integrated telephone system to provide information and facilitate group support conference calling. Fidelity was carefully maintained across all sites through various strategies and measurement approaches. Because of the positive outcomes of the trial, particularly for Hispanic/Latino caregivers, there has been considerable interest in evaluating whether this intervention can achieve similar benefits when integrated into different delivery contexts, such as the Veterans Administration and social service agencies, as well as for other minority groups. However, moving the intervention from an efficacy trial to an effectiveness context has called for various compromises. For example, it was not economically feasible to replicate the computer-integrated telephone system in other settings; it was also not feasible to conduct telephone support groups with families; nor was it feasible for busy social service practices to implement all 12 sessions (Burgio et al., 2009; Nichols, Martindale-Adams, Burns, Graney, & Zuber, 2011). The intervention as initially designed and tested in its efficacy phase could not easily fit with the work flow of existing social service agencies.