The Science-Policy-Practice Gap in Health Promotion and Population Health Science

Twenty-first-century planning and evaluation of HP-DP programs require a trans-disciplinary, mixed-model approach and team in order to have any real opportunity for success and impact. Although no one model can be expected to provide a universal answer to designing HP-DP interventions that will always work, the “Health Promotion Model” can be contrasted with the evidence supporting a “Population Health Science Model.” Future evaluations need to blend the qualitatively oriented focus of the Health Promotion Model with the quantitatively and qualitatively focused methods of the PRECEDE-PROCEED Model developed and applied by Green and colleagues, and many others throughout the world. Their approach stresses the planned application of a logic model, the use of mixed quantitative and qualitative methods, and the engagement of science, practice, and consumer stakeholders (for example, through PBPR) to plan and evaluate tailored interventions for specific problems, populations, and settings.

Well-established principles and methods from epidemiology, biostatistics, behavioral science, health education, and multiple related population health disciplines must be considered and applied in future HP-DP evaluations. Future evaluation planning needs to apply these disciplines, but also to place a broader and equal emphasis on organizational change and on the participation of target groups to gain their support for a program and its evaluation. The next generation of HP-DP evaluations needs to significantly reduce the existing science-policy-practice gap. Achieving this goal requires explicit and implicit commitments to significantly improve public policy decision making. “A Framework for Mandatory Impact Evaluation to Ensure Well Informed Public Policy Decisions,” by Oxman, Bjorndal, Becerra-Posada, et al., in The Lancet (January 2010), provides a cogent discussion of this topic. The authors define an approach, “A WHO Framework,” that governments in all countries need to consider to make the best use of finite resources.

While our knowledge base about how to plan and evaluate is comprehensive, there continues to be a very large gap between the science bases of HP-DP planning and evaluation and the routine application of these principles by evaluation teams and program leadership. Numerous evaluation reports from individual projects, meta-evaluations of specific bodies of evidence by the Cochrane and Campbell Collaborations, and reports from national evaluation units consistently confirm the poor quality of many HP-DP program evaluations. An overwhelming number and proportion of published evaluations and disseminated project reports for low-income countries do not meet the most basic standards of evaluation practice. Many HP-DP program evaluations in high-income countries also fail to meet these standards.

The following are three commentaries on the large, existing HP-DP science-to-practice gap: one represents the low-income sector, and two represent the high-income sector. When Will We Ever Learn? Improving Lives Through Impact Evaluation, by the Evaluation Gap Working Group in Washington, D.C. (Savedoff, Levine, and Birdsall, Center for Global Development-CGD, 2006), presents an excellent synthesis of the state of the science and practice of program evaluation. The Evaluation Gap Working Group, supported by the Gates and Hewlett Foundations, reviewed over a two-year period the methodological rigor of impact evaluations of social programs in developing countries supported by international aid through 2005. After a comprehensive review involving over 100 senior policymakers, agency staff, and evaluation specialists, the Working Group concluded that a methodologically sound “Impact Evaluation” was rarely conducted; when an evaluation was conducted, its quality was almost always poor. The report pointedly noted:

Poor quality evaluations are misleading. No responsible MD would consider prescribing medications without properly evaluating their impact or potential side effects. Yet in social programs ... no such standard has been adopted. While it is widely recognized that withholding programs that are known to be beneficial would be unethical, the implicit corollary—that programs of unknown impact should not be widely replicated without proper evaluation—is frequently dismissed. (p. 3)

Roger Vaughan, associate editor for Statistics and Evaluation of the American Journal of Public Health, indicated in a 2004 special issue of the journal that evaluation has many meanings, but that whatever the definition, it is the business of public programs to find out what works. Consistent with the commentary and reports cited throughout this book, he noted:

Evaluation is an essential part of public health: without evaluation’s close ties to program implementation, we are left with the unsatisfactory circumstance of either wasting resources on ineffective programs or, perhaps worse, continuing public health practices that do more harm than good. The public health literature is replete with examples of well intentioned but unevaluated programs ... that were continued for decades, until rigorous and appropriate evaluations revealed that the results were not as intended. (p. 360)

The Health Committee of the UK House of Commons prepared a report entitled “Health Inequalities” (2009). It presented a synthesis of the evidence to the government, public, and scientific community about the impact of health policies and funded HP-DP programs since 2000. The UK Report dispelled the myth that poor evaluations are conducted only in low-income countries. The Report, and testimony to the House of Commons by multiple senior UK academics at public sessions, confirmed that very little progress had been made in determining which HP-DP programs were effective. There was strong, unanimous agreement among all contributors: almost all evaluations had serious methodological problems, applied very poor designs, and/or failed to appreciate the complexity of evaluation. The Report, especially Chapter 3 (“Evaluation”), indicated:

... despite a ten year push to tackle health inequalities and significant government effort and investment, we still have very little evidence about what interventions actually work. This is in large part due to inadequate evaluation of the policies adopted to address the problem ... (p. 28)

In addition to the technical deficiencies, the report also discussed “[t]he ethical case for evaluation.” It raised the same issue as the above Report from the Center for Global Development: addressing health inequalities with a poor evidence base. The UK report noted:

While lack of research is not a justification for inaction ... the Nuffield Council on Bioethics’ recent report on public health interventions puts forward a strong ethical case for the obligation to research interventions. Introducing unevaluated interventions into communities exposes them to risks, in much the same way as participants in trials of new drugs or surgical procedures are exposed to risk ... the intervention may have unintended consequences. ... (p. 34)

These three discussions have explicit, current implications for many health promotion program advocates, especially the zealots and politicians in many countries who frequently lobby for initiatives and substantial resources with limited evidence of efficacy. They loudly assert that rigorous evaluation methods and randomized clinical trials (RCTs) are unnecessary, and that evaluations are not a good use of resources. While evaluation of HP-DP programs is complex and an RCT should not automatically be conducted for every program, future evaluations and evaluators that do not apply well-established evaluation methods, especially in the countries of greatest need, must address the ethical implications and gross inefficiencies of their activity.

The Evaluation Gap Working Group Report (2006), the House of Commons Health Inequalities Report (2009), and many other commentaries and reports cited throughout this text represent important reference guides for academic programs, evaluation teams, and organizational leadership. These sources provide enduring insight and a comprehensive discussion of the array of conceptual and methodological issues that future evaluations and evaluators must consider. The global literature in 2015 clearly tells us how to conduct rigorous qualitative, process, impact, outcome, and cost evaluations of population health programs. We know how to plan and evaluate, how to potentially improve population health, and how to improve the science, policy, and practice base for major chronic and infectious diseases in any country or region.

Accordingly, the leadership of governments and NGOs, especially the units in international development agencies that fund programs and the so-called Offices of Evaluation responsible for evaluating these initiatives, must accept responsibility for failing to plan and conduct valid assessments of program impact. In the last decade alone, hundreds of agencies have spent, and wasted, billions of dollars on hundreds of programs that were never properly implemented and evaluated. All too frequently, little or no data were collected on salient process, cost, impact, or outcome rates, or vast amounts of data were collected, only to be ignored.

 