Structure of Evaluations
A report of an intervention evaluation must contain sufficient detail about the study design and the intervention to enable an understanding of potential study biases, support replication and the translation of findings into practice, and facilitate cross-study comparisons (Curry, Grossman, Whitlock, & Cantu, 2014; Glasziou, Chalmers, Green, & Michie, 2014; Grant, Mayo-Wilson, Melendez-Torres, & Montgomery, 2013; Mayo-Wilson et al., 2013). The structure of a report and the details that should be included vary considerably across journals.
In response to the persistently inconsistent and inadequate detailing of interventions in scientific journals across all areas of inquiry, reporting guidelines have emerged over the past two decades. The purpose of such guidelines is to improve the transparency and quality of reports and to ensure their comprehensiveness. No single published guideline, however, is both comprehensive and appropriate for behavioral intervention research. In fact, Grant and colleagues (2013) identified 19 different reporting guidelines involving a total of 147 reporting standards relevant to behavioral intervention research. Evidence suggests that using reporting guidelines enhances the quality and consistency of reporting and can potentially advance a field of inquiry (Moher, Jones, Lepage, & CONSORT Group, 2001). Unfortunately, however, many journals currently neither require nor endorse particular guidelines, and researchers do not systematically abide by them. Poor adherence to published guidelines continues to plague publications of behavioral interventions: Samaan and colleagues (2013) found in a review of 50 studies that 86% did not adequately adhere to reporting guidelines.
Nevertheless, guidelines are important and provide a roadmap for reporting a study. As such, various checklists have been advanced to address specific forms of evaluative designs and categories of behavioral interventions. Examples include the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement, which guides the reporting of nonrandomized studies that evaluate an intervention (Des Jarlais, Lyles, Crepaz, & TREND Group, 2004), and the Criteria for Reporting the Development and Evaluation of Complex Interventions in healthcare (CReDECI) for reporting complex behavioral interventions (Mohler, Bartoszek, Kopke, & Meyer, 2012).
One guideline in particular, the Consolidated Standards of Reporting Trials (CONSORT; 2010), is the most consistently used. It has been adopted by over 100 major journals, most of which are biomedical. The CONSORT checklist consists of 25 items for reporting a randomized trial of a nonpharmacological intervention, including, for example, randomization procedures, blinding, statistical methods, and harms. Although its limitations are discussed later, we recommend using the CONSORT checklist (see www.consort-statement.org/ for all items) to help structure manuscripts for journals, even for those journals that do not require it. The checklist can also help guide the preparation of a grant application that proposes to evaluate a behavioral intervention (see Chapter 23 on grant writing for intervention research). Our purpose in this chapter is not to provide a comprehensive review and evaluation of the relative merits of existing guidelines, but to highlight their necessity and utility. As the CONSORT checklist is the most widely used, it is a good place to start when seeking guidance for structuring a main trial outcome paper.