How Much Evidence Is Sufficient?

These cases also implicitly raise the issue: What constitutes sufficient evidence for an intervention/program/service model to warrant moving forward with its implementation? There is no consensus as to how much evidence is enough for implementation to be considered. In drug studies, multiple tests for safety, efficacy, and delineating underlying mechanisms are required prior to U.S. Food and Drug Administration (FDA) approval and adoption. Once FDA approval has been achieved, there is a tendency not to require further trials of the drug for the indication for which it was approved.

There are, however, no parallel pipeline and approval mechanisms for behavioral interventions. Most stakeholders in practice settings seek, at a minimum, the following forms of evidence: benefit for a targeted population; some understanding of for whom the intervention is most effective; and its cost, cost-benefit, and/or cost-effectiveness. As behavioral interventions address many important and pressing needs with few adverse events, adoption may occur on the basis of findings from only a single randomized trial. Nevertheless, while ongoing evaluation of an intervention in new contexts is always warranted, the question is the rigor and design of such evaluations; that is, can the intervention be evaluated within, for example, a quality indicator framework, or is a new RCT required? Take, for example, GBGB; the essential question is whether we need to wait another 3 to 5 years to complete a randomized trial that evaluates the intervention’s outcomes for low-income minority populations not included in the original trial. Alternatively, GBGB could be implemented with populations that loosely fit the criteria of the original trial (e.g., low income, having depressive symptoms, amenable to an activity-oriented behavioral approach) and evaluated within the context of its delivery. Although we favor the latter approach, this is an issue that needs careful consideration and empirical evidence regarding cultural adaptations of interventions to guide decision making. It remains a hotly debated issue among funders and researchers.

A related question is how intervention programs should be evaluated in order to improve their implementation potential. Chapter 2 discussed the potential role of hybrid and pragmatic designs as one way of rigorously addressing efficacy-, effectiveness-, and/or implementation-type questions in a trial or multisite study. As discussed earlier, Hospital at Home was evaluated in the actual context in which it is intended to be delivered. As shown, this affords distinct advantages, including immediate knowledge of implementation challenges, of the modifications or refinements to the intervention that need to occur or that work best in a particular setting, and of the resources needed for successful implementation of this model in different hospital organizations. On the basis of the program’s evaluation in three different settings, Hospital at Home may not need a translational phase in which tweaks to the program are implemented and evaluated; rather, it may now be ready for broad implementation.

In contrast, as Skills2Care® was not tested within a home care organizational context, a translational phase was necessary in which manuals, training, and intervention protocols were redesigned to fit work flow, supervisory structures, and referral mechanisms (Gitlin et al., 2010). Similarly, as GBGB was tested within only one senior center, it is unclear whether other senior centers with varying levels of resources and staffing could effectively implement the program. Thus, GBGB still needs to be evaluated on a much larger scale in multiple senior center settings that differ with regard to staffing, staff-to-member ratios, and so forth.

These cases also highlight the tension between testing interventions in rigorous yet static, protocol-driven randomized trials versus other evaluative frameworks that allow for immediate program refinements, such as quality improvement (QI) evaluations or some of the emerging design strategies mentioned in Chapter 2. Different design strategies may facilitate more rapid research responses and provide critical insights into the contextual challenges that occur in implementation (Riley et al., 2013; Szanton, Leff, & Gitlin, 2015), thus easing the implementation tension.

The take-home message is this: Understanding contextual factors that impinge upon implementation is a worthy research goal early on in the pipeline when developing an intervention and evaluating its effectiveness.
