One reason why putting evidence into practice is difficult is the type of evidence that is generated. Stakeholders, ranging from providers to patients/participants, increasingly seek high-quality evidence when deciding which behavioral interventions and evidence-based strategies to use to improve health or maximize quality of life for the clients or communities they serve. Yet, the ways in which interventions are developed and tested do not typically address the questions that decision makers or key stakeholders pose (Tunis, Stryer, & Clancy, 2003). For example, whereas the Centers for Medicare and Medicaid Services is concerned about cost savings and reduction of harm and hospitalizations, most behavioral interventions do not examine these types of outcomes in the efficacy or even effectiveness trial phases. Thus, outcomes critical to a stakeholder may not be addressed, leaving a gap between the evidence that is produced and the evidence that would be more valued and subsequently used.

Another reason for the persistent research-to-practice gap is related to the approaches used to enhance research uptake in practice settings. Traditional approaches to enhancing the uptake of research findings have focused on improving the way in which the evidence is presented. Thus, efforts have been directed at identifying, synthesizing, and then disseminating evidence in practical and accessible formats, such as clinical practice guidelines, Cochrane reviews, and systematic and meta-analytic reviews. Professional societies and some government agencies (e.g., the Agency for Healthcare Research and Quality [AHRQ]) fund researchers to produce practice guidelines to assist clinicians, patients, and their families in making intervention decisions. Although these are helpful tools, the process by which the evidence or the intervention actually becomes adopted, operationalized, and put into practice remains a “black box” (Brownson & Jones, 2009). Implementation science has revealed that the dissemination of research findings via practice guidelines is only one potential factor influencing uptake of evidence. Other factors, such as clinician experience, patient characteristics, reimbursement and financial considerations, and organizational climate, have also been shown to influence whether an intervention is adopted, implemented, and sustained.

Yet another reason for the research-to-practice gap is that it is challenging for clinicians or health and human service professionals to use evidence-based programs and guidelines. Most clinicians can barely keep pace with the rapid advances in health care knowledge. There are thousands of published papers on how to develop clinical guidelines across health care issues; yet, there are relatively few on how to actually implement such guidelines in routine practice in care settings and to do so cost-effectively (Thompson, Estabrooks, Scott-Findlay, Moore, & Wallin, 2007).

Furthermore, a consistent finding from the practice guideline literature is that the available evidence is missing important details essential for its ultimate translation into practice (Glasgow & Emmons, 2007). For example, a recent review of dementia caregiving interventions concluded that few studies provided data on the long-term effectiveness of interventions in typical care settings, the specific disease stage or etiology of people with dementia who are most likely to respond, or the outcomes of most relevance to families and decision makers (e.g., quality of life, symptom reduction, costs, reduction in hospitalizations; Maslow, 2012). This scenario holds for most areas of health care: few behavioral intervention studies have examined long-term outcomes, outcomes relevant to decision makers, cost, cost-benefit or cost-effectiveness, or who benefits most and why.

Thus, the challenge of improving clinically important outcomes such as quality of life, health, and costs of care for culturally diverse populations is, in large part, a consequence of difficulties in disseminating and implementing effective interventions. It is not, however, due to a dearth of innovative behavioral intervention research. Behavioral intervention research has yielded a multitude of proven programs. Still, the issue remains the mismatch among the design and methodological decisions made by researchers (and what funders support); the interests, values, and needs of key stakeholders and end users; and the restrictions or realities of practice and community settings. Research design and methodological decisions may initially be appropriate for the research endeavor and for developing a competitive grant proposal, but they may not meet the needs and interests of potential end users and stakeholders. As a result, the “knowledge gap,” also referred to as the “implementation gap” or “quality chasm,” as illustrated in Figure 19.1, persists despite best efforts to date.
