INTERVENTION RESEARCH METHODS

Community health interventions, including those involving CBPR, are undertaken when adequate time, financial resources, and scientific expertise are available for study design, data collection, and statistical analysis of quantitative data. The individuals who deliver the health intervention and collect information for process, impact, and outcome evaluation are members of the study team. In designing and conducting intervention studies, attention must be given to minimizing potential sources of bias (for example, selection bias, information bias, and uncontrolled confounding by extraneous variables) and to the internal and external validity of the information obtained in the study. To determine whether a health intervention is effective, one or more control groups are needed to provide a basis for comparison. The group(s) randomly assigned to receive the health intervention is known as the experimental group. When it is feasible to allocate research participants randomly to experimental and comparison groups, study designs involving randomized controlled trials offer considerable scientific rigor.

Since the late 1940s, the randomized controlled trial has become the gold standard in biomedical research.8 In such studies, the research participants are randomly assigned to treatment conditions after informed consent is obtained and inclusion and exclusion criteria are applied. In community health research, randomized designs are often used in CBPR studies of interventions for disease prevention or early detection. These include studies in which the intervention is at the level of individual research participants and those in which groups of people (e.g., elementary schools, neighborhoods, or military combat units) or whole communities are randomized to receive or not receive the health intervention.
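
As a simple illustration of individual-level random allocation after eligibility screening and informed consent, the following sketch assigns hypothetical participant IDs to two study arms; the IDs, allocation ratio, and seed are assumptions chosen for illustration, not details of any particular trial.

```python
import random

def randomize(participant_ids, arms=("intervention", "control"), seed=2024):
    """Randomly allocate eligible participants to study arms (approximately 1:1)."""
    rng = random.Random(seed)   # fixed seed so the allocation list is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)            # put participants in random order
    # Alternating assignment over the shuffled list yields a balanced allocation
    return {pid: arms[i % len(arms)] for i, pid in enumerate(ids)}

# Hypothetical participants who met the inclusion criteria and gave informed consent
eligible = [f"P{n:03d}" for n in range(1, 9)]
print(randomize(eligible))
```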

In intervention studies, measurements can be obtained before the health intervention begins (i.e., pre-test), after the intervention ends (i.e., post-test), and at one or more times while the intervention is underway.4 Measurement data are commonly obtained by use of surveys that can be administered by telephone interview, in person, or via postal survey questionnaires. In some studies, additional data are obtained from clinical or hospital records (e.g., recorded information about receipt of mammograms or Papanicolaou tests).

Examples of experimental (randomized) study designs include pre-test/post-test designs with an experimental group(s) and control group(s), post-test-only designs with an experimental group(s) and control group(s), and time-series designs.4 In time-series studies, measurements are obtained both before and after a health intervention is implemented. Potential threats to the internal validity of randomized controlled studies include lack of concealment of allocation, secular changes in comparison groups, contamination of the comparison group, and inadequate sample sizes.9 Problems with external validity can also occur, for example, when results from randomized controlled trials involving carefully selected and highly motivated volunteers cannot be translated to routine practice.

Over the last two decades, cluster randomized controlled trials (CRTs), also known as group randomized trials or community randomized trials, have been increasingly used in CBPR. The key element of these trials is that individual observations are nested within groups or clusters, such as communities, and the intervention is applied to the cluster. The clusters of individuals are randomized to receive different interventions or to serve as comparison groups. The units of randomization are diverse, for example, clinics, communities, practice groups, schools, or worksites. The intervention is delivered to and affects groups of people rather than individuals. Individuals within the same community or cluster tend to be correlated with one another, since they live in similar conditions. Because of these within-cluster similarities, the variance of the intervention effect estimate is inflated, which reduces statistical efficiency.10 The degree of reduction in efficiency depends on the average cluster size and on the intraclass correlation coefficient (ICC). The ICC can be interpreted as the standard Pearson correlation between any two responses in the same cluster, or as the proportion of the overall variation that is attributable to between-cluster variation. This variance inflation also affects the sample size requirement: the statistical power of a CRT may be substantially less than that of an individually randomized trial of the same size, since participants within any cluster are more likely to have similar outcomes.
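
To make the variance inflation concrete, the design effect is commonly written as DEFF = 1 + (m − 1) × ICC, where m is the average cluster size, and the effective sample size is the actual sample size divided by DEFF. The sketch below computes these quantities for illustrative values; the cluster size, ICC, and number of clusters are assumptions chosen for illustration, not figures from any specific trial.

```python
def design_effect(avg_cluster_size, icc):
    """Variance inflation factor for a cluster randomized trial:
    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size."""
    return 1 + (avg_cluster_size - 1) * icc

def effective_sample_size(n_total, avg_cluster_size, icc):
    """Number of independent observations the clustered sample is 'worth'."""
    return n_total / design_effect(avg_cluster_size, icc)

# Illustrative (assumed) values: 20 clusters of 50 people each, ICC = 0.02
m, icc, n_total = 50, 0.02, 20 * 50
print(design_effect(m, icc))                   # 1.98 -> required sample size roughly doubles
print(effective_sample_size(n_total, m, icc))  # ~505 effectively independent observations
```

Even a small ICC can produce a large design effect when clusters are large, which is why CRTs typically require more participants than individually randomized trials to achieve the same statistical power.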

As an example of a CRT conducted using a CBPR approach, the Faith, Activity, and Nutrition Program targeted both physical activity and healthy eating in 74 African Methodist Episcopal churches in South Carolina.11 A total of 1,257 members of the churches participated. Data were collected from 2007 to 2011. The churches were randomized to either an immediate 15-month intervention or a delayed intervention (control churches). A CBPR approach guided the development and implementation of the intervention, which consisted of a full-day training and a full-day cook training. Participants also received a stipend and 15 months of mailings and technical assistance calls to support implementation of the intervention. The primary outcomes of interest were self-reported moderate-to-vigorous-intensity physical activity, self-reported fruit and vegetable consumption, and measured blood pressure. Measurements were obtained at baseline and at 15 months. An intention-to-treat analysis was performed using repeated measures analysis of variance (ANOVA), with testing of group × time interactions, controlling for church clustering and size and for participant age, gender, and education.11 In addition, post hoc analyses of covariance (ANCOVAs) were performed for participants with complete measurements. There was a significant intervention effect for self-reported leisure-time physical activity (p = .02) but no effect for the other outcomes. The ANCOVA analyses showed an intervention effect for self-reported leisure-time physical activity (p = .03) and self-reported fruit and vegetable consumption (p = .03). The researchers concluded that the program produced small but significant increases in self-reported leisure-time moderate-to-vigorous-intensity physical activity and that it had potential for broad dissemination and reach.11
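
A common way to account for cluster-level correlation in this kind of analysis is a mixed-effects model with a random intercept for each cluster (here, each church) and a group × time interaction term for the intervention effect. The sketch below, using the statsmodels library, is only an illustration of that general approach under assumed, hypothetical variable and file names (outcome, group, time, church, age, gender, education, fan_long_format.csv); it is not the exact model reported in the published study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed data layout: one row per participant per measurement occasion, with
# hypothetical columns: outcome, group (intervention vs. control),
# time (baseline vs. 15 months), church (cluster ID), plus covariates.
df = pd.read_csv("fan_long_format.csv")  # hypothetical file name

# The random intercept for church absorbs correlation among members of the same
# congregation; the group:time interaction estimates the intervention effect.
model = smf.mixedlm(
    "outcome ~ group * time + age + gender + education",
    data=df,
    groups=df["church"],
)
result = model.fit()
print(result.summary())
```

Compared with an ordinary repeated measures ANOVA, a model of this form makes the within-church correlation explicit, so standard errors for the intervention effect are not understated by ignoring clustering.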

 