Interventions to prevent or reduce adolescent dating violence: methodological considerations in randomized-controlled trials

Ernest N. Jouriles, Kelli S. Sargent, Alison Krauss and Renee McDonald

Introduction

Adolescent dating violence (ADV) refers to acts of physical, sexual, psychological, or emotional violence that occur between adolescents and a current or former romantic partner (Centers for Disease Control and Prevention, 2019). Such acts include kicks or slaps, threats, insults or put-downs, sexual coercion, or stalking. Among dating adolescents in the United States (US), approximately 21% of females and 10% of males report experiencing physical or sexual ADV over the course of a single year (Vagi, Olsen, Basile, & Vivolo-Kantor, 2015). Psychological and emotional dating violence estimates are much higher, with up to 77% of adolescents reporting verbal or emotional abuse perpetration in the past year (Niolon et al., 2015). Experiencing ADV is linked to a range of adjustment difficulties, including depressive symptoms (Ackard, Eisenberg, & Neumark-Sztainer, 2007; Exner-Cortens, Eckenrode, & Rothman, 2013), suicidal ideation (Nahapetyan, Orpinas, Song, & Holland, 2014), substance use (Ackard et al., 2007; Foshee, Reyes, Gottfredson, Chang, & Ennett, 2013), and risk for revictimization (Jouriles, Choi, Rancher, & Temple, 2017).

Developmentally, dating has been conceptualized as a novel “task” of adolescence. Through dating, youth build skills and form scripts for navigating romantic relationships (Collins, Welsh, & Furman, 2009). Almost 50% of US adolescents report a current dating relationship by age 15 (Carver, Joyner, & Udry, 2003), and the average age of first sexual intercourse in the US quickly follows at 16.5 years (Vasilenko, Kugler, & Rice, 2016). Thus, adolescence offers youth opportunities to form and crystallize romantic relationship scripts. Unfortunately, sometimes these scripts include aggression (Jouriles, McDonald, Mueller, & Grych, 2012). Given the scope and consequences of ADV, early to middle adolescence represents an ideal time period to intervene to attempt to prevent or reduce such violence.

Interventions to prevent or reduce adolescent dating violence

Reviews of the literature suggest that over 60 programs have been developed and tested to address ADV (De La Rue, Polanin, Espelage, & Pigott, 2017; Edwards & Hinsz, 2014; Fellmeth, Heffernan, Nurse, Habibula, & Sethi, 2013; Lundgren & Amin, 2015; Ting, 2009). These programs vary in the populations they target (e.g., potential perpetrators, potential victims, witnesses or bystanders of ADV), the attitudes and behaviors they address (e.g., rape myths, bystander behaviors), and the methods by which they are delivered (e.g., small group discussions, classroom lectures). However, they share the broader goal of reducing violence perpetration and victimization among adolescents.

According to reviews of the literature, ADV programs tend to have favorable effects on attitudes and knowledge pertaining to violence, yet evidence that they reduce violence perpetration and victimization is lacking. For example, a meta-analysis of school-based ADV interventions (De La Rue et al., 2017) documented favorable effects on rape myth acceptance, ADV attitudes, knowledge, and healthy conflict skills at posttest across 23 studies that utilized a control group. There were no differences, however, between intervention and control groups on ADV perpetration or victimization. In short, although ADV intervention programs show promising effects on important constructs, more work is needed to achieve and document observable reductions in actual violence and victimization. Thus, researchers will likely attempt to build upon prior ADV intervention efforts, and continue to design studies to prevent or reduce ADV.

Current review

Prior reviews of evaluation research on ADV interventions (e.g., De La Rue et al., 2017; Edwards & Hinsz, 2014; Fellmeth et al., 2013; Lundgren & Amin, 2015; Ting, 2009) focus on whether interventions designed to address ADV influence dependent variables of interest (e.g., attitudes about ADV, ADV perpetration). Yet, there is tremendous variability in the methodological rigor of these evaluations. Rigor is an important consideration in intervention studies, as diminished rigor can contribute to biased results (Higgins, Churchill, Tovey, Lasserson, & Chandler, 2011; Kazdin, 2017), making it difficult to infer that the intervention being evaluated caused the observed effects. Lack of rigor may also exaggerate the actual effects of an intervention and obstruct one's ability to interpret promising findings (Kazdin, 2017); less rigorous studies have shown larger effect sizes than studies with greater methodological rigor (Cheung & Slavin, 2016; Wood et al., 2008). Insufficient attention to certain methodological considerations, such as sample size and measurement of key study variables (e.g., violence perpetration), can also impede the ability to detect intervention effects.

There are many challenges to conducting methodologically rigorous research evaluating effects of ADV intervention programs. Some of these are common to evaluations of interventions in general, such as the ability to recruit an adequately sized sample and to track and retain participants over time. Other challenges arise from the combination of the sensitive nature of the subject matter, especially in a population of minors, and the challenges of engaging multiple stakeholders in conducting the research, including adolescents, parents, personnel at the settings where the intervention programs are delivered, and institutional review boards.

Our primary aim in this chapter is to help the field move toward greater sophistication in both the conduct and reporting of methods in randomized controlled trials (RCTs) evaluating ADV intervention programs. An RCT is a study in which units (individuals, classrooms, schools) are allocated at random, or by chance, either to receive the intervention that is being evaluated or to receive a control or comparison intervention. RCTs are often considered the gold standard design for drawing conclusions about the effects of an intervention program in medicine, psychology, and education (Schulz, Altman, & Moher, 2010).

To organize information and describe the importance of certain features of RCTs, we used the Consolidated Standards of Reporting Trials (CONSORT) statements (Schulz et al., 2010) and the Clinical Trials Assessment Measure (CTAM) (Tarrier & Wykes, 2004) as guides. The CONSORT statements emanated from efforts to improve the reporting of RCTs and were based, in part, on methodological research identifying study features that can influence outcomes. The CTAM is a tool for evaluating the methodological quality of trials of clinical interventions (e.g., treatments for a psychological disorder). Both the CONSORT statements and the CTAM include items to help evaluate studies.

We began by systematically reviewing the literature to identify randomized controlled trials of ADV intervention programs. After identifying relevant studies, we reviewed them for the purpose of providing examples of what we consider exemplary methodological practices for designing, conducting, and reporting ADV research in the domains of: (1) sample description, (2) description of treatment and control conditions, (3) allocation of participants to treatment and control conditions, (4) assessment of outcomes, (5) participant retention and missing data, (6) data analysis, and (7) considerations prior to initiating an RCT.

Literature search

The MEDLINE, ERIC, PsycINFO, and PsycARTICLES databases were searched to identify published articles and gray literature (e.g., empirical theses and dissertations) pertaining to ADV interventions through August 2017. The following search terms were entered in factorial fashion: Term 1) adolescent, teen, youth, student, high school, middle school; Term 2) dating violence, dating abuse, relationship violence, relationship abuse, partner violence, partner abuse, intimate partner violence, sexual violence, physical violence, psychological violence, emotional violence, verbal abuse, date rape, sexual coercion, revictimization, rape, sexual assault, victimization, perpetration, relationship aggression, partner aggression, courtship violence, courtship aggression, courtship abuse, dating aggression, sexual aggression, verbal aggression; Term 3) intervention, program, education, training, intervention program, curriculum, prevention.

An initial screening yielded 12,835 published articles and dissertations (referred to collectively as articles), after accounting for exact duplicates. One primary rater and two additional independent raters screened titles and abstracts of all retrieved articles. Each independent rater overlapped at least 10% with the primary rater. Discrepancies were discussed and reconciled by consensus. To be included in the review, articles had to be 1) published in English, and report on a study that: 2) empirically evaluated an intervention that had been administered (not just a program description or analysis of baseline data), 3) utilized an RCT design, 4) included measures of ADV or ADV-related outcomes, such as knowledge, attitudes, efficacy, and/or bystander behavior related to ADV, and 5) sampled adolescents (youth 10–19 years old). Studies that used exclusively elementary or college student samples were excluded.

Figure 44.1 details the process of article exclusion. Of the 12,835 identified articles, 39 met inclusion criteria. Among these, 37 unique intervention programs were evaluated. Table 44.1 lists the articles that were included, and summarizes aspects of the programs evaluated (e.g., number of sessions, duration, setting of the evaluation). Most of the articles report the evaluation of a single ADV intervention using a two-group RCT design (i.e., an intervention group and a control group), but three articles evaluated two interventions separately against a control group (DePrince, Chu, Labus, Shirk, & Potter, 2015; Taylor, Stein, & Burden, 2010; Taylor, Stein, Mumford, & Woods, 2013), and are listed twice in Table 44.1.

Figure 44.1 CONSORT flow diagram of article identification, screening, and inclusion

Discussion and analysis

Practices within methodological domains

Sample description

Investigators commonly reported the inclusion and exclusion criteria used to determine who was eligible to participate in their studies. This is especially important in ADV research, because some programs target specific groups of adolescents, such as males or females only (e.g., Coaching Boys into Men, My Voice My Choice). Investigators also routinely provided descriptive information on their samples, including the sample size and demographics such as sex, age, race, and ethnicity.

ADV programs are often administered and evaluated in community settings (e.g., schools, juvenile correction facilities, inpatient treatment centers), and features of these settings may influence aspects of the program and its evaluation. Thus, it is desirable to go beyond basic sample description and provide information about the setting as well. For school-based studies, this can include socioeconomic data (e.g., percentage of students qualifying for free or reduced lunch; Sargent et al., 2017) and general academic proficiency and truancy rates (Taylor et al., 2013). Fay and Medway (2006) went a step further, situating the school within its broader community context:

The educational setting of this project was the only high school located in a rural, primarily agricultural, town of 6600 residents within South Carolina. The town was selected because of high-risk factors of residents: 21% of the population had family

incomes below the poverty rate and the town's incidence of reported rape was 62% higher than national averages.

(p. 225)

Table 44.1 Articles included in the review and characteristics of the programs evaluated

| Article authors (year) | Program | Sessions | Duration (min) | Approach | Presenter | Setting | Format | Study n | % Male | Mean age | Outcomes |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Baiocchi et al. (2017) | IMPower | 6 | 720 | Selected | School staff | School | Group | 5,686 | 0 | – | V |
| Brown et al. (2012) | PR:EPARe | – | – | Universal | Online/indirect | School | Group | 505 | 48.9 | 13.5 | A, E |
| Coker et al. (2017) | Green Dot | 2 | 350 | Universal | Community professional | School | Group | 16,509 | 54.4 | – | P, V |
| Connolly et al. (2015) | RISE | 2 | 90 | Universal | Community professional | School | Group | 509 | 48.6 | 12.4 | A, K, V |
| Cunningham et al. (2013) | SafERteens | 1 | 35 | Indicated | Community professional; online/indirect | Hospital | Group | 397 | 35.5 | 16.8 | V |
| DePrince et al. (2015) | Social Learning/Feminist | 12 | 1080 | Selected | Research staff | Community | Group | 180 | 0 | 15.9 | V |
| DePrince et al. (2015) | Risk Reduction & Executive Functioning | 12 | 1080 | Selected | Research staff | Community | Group | 180 | 0 | 15.9 | V |
| Espelage, Low, Polanin, and Brown (2013) | Second Step | 15 | 750 | Universal | School staff | School | Group | 3,616 | 52 | 11.2 | P, V |
| Espelage, Low, Polanin, and Brown (2015) | Second Step | 28 | 1400 | Universal | School staff | School | Group | 3,658 | 52 | 11 | P, V |
| Fay and Medway (2006) | – | 2 | 120 | Universal | Research staff | School | Group | 154 | 43.5 | 15.5 | A |
| Foshee et al. (2004) | Safe Dates | 10 + booster | 450+ | Universal | School staff | School | Group | 957 | 41.5 | 15.5 | P, V |
| Foshee et al. (2005) | Safe Dates | 10 | 450 | Universal | School staff | School | Group | 1,566 | 46.8 | 13.9 | A, P, V |
| Foshee et al. (2012) | Families for Safe Dates | 10 | 450 | Universal | Online/indirect | Community | Parent–adol | 324 | 42 | 14 | A, P, V |
| Foshee et al. (2015) | Moms and Teens for Safe Dates | 6 | – | Universal | Online/indirect | Community | Parent–adol | 409 | 25.9 | 13.6 | P, V |
| Gonzalez-Guarda et al. (2015) | Youth: Together Against Dating Violence | 6 | 540 | Selected | Research staff | School | Parent–adol | 82 | 44 | 14.3 | P, V |
| Jaycox et al. (2006) | Ending Violence | 3 | 180 | Universal | Community professional | School | Group | 2,540 | 48.3 | 14.4 | A, K, P, V |
| Joppa, Rizzo, Nieves, and Brown (2016) | Katie Brown Educational Program | 5 | 300 | Universal | Community professional | School | Group | 225 | 46 | 15.9 | A, K, P, V |
| Langhinrichsen-Rohling and Turner (2012) | Building a Lasting Love | – | 360 | Selected | Community professional | Community | Group | 72 | 0 | 17.2 | P, V |
| Levesque, Johnson, Welch, Prochaska, and Paiva (2016) | Teen Choices | 3 | 90 | Universal | Online/indirect | School | Individual | 3,901 | 46.5 | 15.4 | A, P, V |
| Macgowan (1997) | – | 5 | 300 | Universal | School staff | School | Group | 440 | 56.1 | 12.6 | A, K |
| Mathews et al. (2016) | PREPARE | 21 | 1890 | Selected | Community professional | School | Group | 3,451 | 38.7 | 13 | P, V |
| McArthur (2010) | Young Parenthood Program | 10 | – | Selected | Community professional | Community | Individual | 46 | 50 | – | V |
| Miller et al. (2012) | Coaching Boys into Men | 11 | 165 | Selected | School staff | School | Group | 2,006 | 100 | – | A, B, K, P |
| Miller et al. (2013) | Coaching Boys into Men | 11 | 165 | Selected | School staff | School | Group | 1,513 | 100 | – | A, B, K, P |
| Miller et al. (2015) | SHARP | 1 | – | Universal | School staff | School | Individual | 1,011 | 23.7 | – | E, B, K, V |
| Pacifici, Stoolmiller, and Nelson (2001) | Dating and Sexual Responsibility | 4 | 240 | Universal | School staff | School | Group | 458 | 48 | 15.8 | A |
| Peskin et al. (2014) | It's Your Game . . . Keep It Real | 24 | 1080 | Selected | School staff | School | Group | 766 | 40 | 13 | P, V |
| Polanin and Espelage (2015) | Second Step | – | 100 | Universal | School staff | School | Group | 3,616 | 47.2 | 11.2 | P, V |
| Roberts (2009) | Expect Respect | 4 | 186 | Universal | Community professional | School | Group | 332 | 49 | – | A, P, V |
| Rothman, Stuart, Heeren, Paruk, and Bair-Merritt (n.d.) | Real Talk | 1 | 45 | Indicated | Community professional | Hospital | Individual | 172 | 14 | 17 | P, V |
| Rothman and Wang (2016) | Real Talk | 1 | 45 | Indicated | Community professional | Hospital | Individual | 27 | 26 | 17 | P, V |
| Rowe, Jouriles, and McDonald (2015) | My Voice, My Choice | 1 | 90 | Selected | Research staff | School | Group | 83 | 0 | 15.6 | V |
| Salazar and Cook (2006) | Men Stopping Violence | 5 | 390 | Indicated | Community professional | Juvenile correctional | Group | 47 | 100 | 14.9 | A, K |
| Sargent, Jouriles, Rosenfield, and McDonald (2017) | TakeCARE | 1 | 26 | Universal | Online/indirect | School | Group | 1,295 | 47.5 | 15.3 | B, E |
| Taylor et al. (2010) | Interaction | 5 | 200 | Universal | Community professional | School | Group | 123 | 48 | 12 | A, K, P, V |
| Taylor et al. (2010) | Law and Justice | 5 | 200 | Universal | Community professional | School | Group | 123 | 48 | 12 | A, K, P, V |
| Taylor et al. (2013) | Shifting Boundaries: Classroom | 6 | 240 | Universal | School staff | School | Group | 2,655 | 46.5 | 11.8 | P, V |
| Taylor et al. (2013) | Shifting Boundaries: Classroom + Building | 6 | 240 | Universal | Online/indirect; school staff | School | Group | 2,655 | 46.5 | 11.8 | P, V |
| van Lieshout et al. (2016) | Make a Move | 8 | 720 | Selected | Research staff | Community | Group | 177 | 100 | 14.8 | A, E |
| Wolfe et al. (2003) | Youth Relationships Project | – | 2160 | Selected | Community professional | Community | Group | 158 | 47.2 | 14.5 | P, V |
| Wolfe et al. (2009) | Fourth R | 21 | 225 | Universal | School staff | School | Group | 1,722 | 48 | 15.2 | V |
| Yom and Eun (2005) | CD-ROM Educational Program | 18 | 60 | Universal | Online/indirect | School | Individual | 79 | 100 | 11.5 | A, K |

Note: A = Attitudes, B = Bystander, E = Efficacy, K = Knowledge, P = Perpetration, V = Victimization. Dash (–) = not reported.

Such information provides readers valuable context for thinking about study results. For studies recruiting through larger agency networks, setting descriptions like the one provided by Wolfe and colleagues (2003) are helpful:

Youths from Child Protective Services (CPS) agencies were targeted for the study because their histories of maltreatment. . . . Seven CPS agencies participated in the study, including urban, rural, and semirural jurisdictions (whereas over 90% of the sample came from CPS agencies and were under a protection, supervision, or wardship order, we also included a small subset of maltreated youths attending a special needs school in the community). Social workers were provided with information on the content and requirements of the study; each agency had a volunteer coordinator to assist in identifying potential participants.

(p. 281)

Many articles reporting on evaluations of ADV intervention programs do not include an explanation or rationale for the size of the sample recruited for the study. Depending on the randomization and nesting procedure, the relevant sample size may be the number of participants, classrooms, or schools, or some combination thereof. An important consideration in determining sample size is power: the ability, using statistical tests, to detect an intervention effect when one exists (Field, 2018). Power is determined by a number of factors, including sample size, the size of the anticipated between-group difference (or effect size), and the statistical test used for data analysis. For instance, a larger sample is needed to detect a small effect size than a large one. A study that is "underpowered" may lack a large enough sample to detect meaningful small effects. Reporting the reasoning for the sample size, and the extent to which the study is sufficiently powered, gives readers a fuller understanding of the study results (or lack thereof).
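To make the relationship between effect size and required sample size concrete, the sketch below approximates the per-group n for a two-sided, two-group comparison of means using the standard normal approximation. It is an illustration only, not a substitute for a full power analysis; the function name is ours.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided, two-sample test of means.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized mean difference (Cohen's d).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # value corresponding to the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "small" effect (d = 0.20) demands far more participants than a "large" one (d = 0.80).
print(n_per_group(0.20))  # -> 393 per group
print(n_per_group(0.80))  # -> 25 per group
```

The contrast between the two calls illustrates why underpowered ADV trials are so common: detecting a small effect requires roughly sixteen times the sample needed for a large one.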

An RCT of an ADV intervention should ideally be designed with sufficient power to detect small between-group effects, if such differences exist. Small intervention effects can be extremely meaningful for an ADV intervention program (Sargent et al., 2017). This is especially true for ADV intervention programs that are more universal in nature and meant to be delivered to large groups of teens, such as programs designed for dissemination to an entire school or school district. As Sargent and colleagues (2017) note, small effect sizes across an entire school can conceivably result in substantial reductions in ADV:

[T]he average number of situations in which helpful bystander behavior was reported by students at baseline was 4.39. In a high school of over 1000 students, this translates to at least 4390 helpful bystander behaviors over a 3-month period. The average difference in helpful bystander behavior between students who viewed TakeCARE and those in the control condition was 0.56 situations per student at follow-up, translating to an additional 560 helpful bystander behaviors over the follow-up period. Such an increase could make a considerable difference in reducing school victimization rates and could help contribute to changing a school’s culture regarding tolerance of relationship violence.

Under ideal circumstances, the sample size for an ADV intervention program evaluation is determined in advance of the evaluation, and the method for determining the study's power (e.g., a power analysis) is described in the evaluation report. However, a post hoc justification of sample size is better than none at all. Coker and colleagues (2017) provide an excellent example of sample size justification:

The sample size for the primary analysis was determined a priori based on the number of regional rape crisis centers (n = 13) and the design in which two demographically similar public high schools were identified and randomized in each of the 13 service regions. . . . For secondary analyses using individual-level data within a single year, power calculations were provided using Stata, version 11 (sampsi), assuming 500 students participating at each school within a year, accounting for clustering of students within schools (intraclass correlation of 0.005), and a two-sided significance level of 0.05. Greater than 80% power was anticipated to test for a 50% reduction in physically forced sex, relative to a 5% rate in the control condition (Appendices, available online).

(p. 568)
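The clustering adjustment in this passage (the intraclass correlation, or ICC) can be illustrated with the usual design-effect formula, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size. The figures below are hypothetical, chosen only to echo the scale of a school-randomized trial with 500 students per school and an ICC of 0.005.

```python
def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation from randomizing clusters (e.g., schools) rather than individuals."""
    return 1 + (cluster_size - 1) * icc

def effective_n(total_n: float, cluster_size: float, icc: float) -> float:
    """Sample size equivalent to an individually randomized trial of the same precision."""
    return total_n / design_effect(cluster_size, icc)

# Hypothetical: 26 schools of 500 students each, ICC = 0.005.
deff = design_effect(500, 0.005)                 # 1 + 499 * 0.005 = 3.495
print(round(deff, 3))                            # -> 3.495
print(round(effective_n(26 * 500, 500, 0.005)))  # 13,000 students behave like ~3,720
```

Even a tiny ICC inflates the required sample substantially when clusters are large, which is why cluster-randomized ADV trials report ICC assumptions alongside their power calculations.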

Investigators should also describe how the participants were recruited, since intervention effects may vary by recruitment method (Kazdin, 2017). For example, an ADV intervention found to be effective for students who volunteered to participate after seeing a study advertisement may have different effects (or none) on other groups of students. A clear presentation of how participants were recruited can help readers make judgments about the applicability of the intervention program to other samples. Again, in Coker and colleagues' (2017) evaluation of the Green Dot bystander intervention program, the investigators clearly identify the program as a universal program targeting all eligible students, and they specify that all students were included unless the student or parent opted out of the study protocol (p. 568).

Description of intervention and control conditions

There are several key things to consider when describing ADV intervention programs. First, the description should be thorough enough to allow the reader to have a good conceptual understanding of the program's design and objectives. Such descriptions should include key content and characteristics (e.g., number of sessions, service provider, format) of the intervention. One way to do this is to provide a link to a website with all program materials, as was done by Miller and colleagues (2012) in their evaluation of Coaching Boys into Men. Several researchers have also provided detailed tables of session-by-session intervention content (e.g., Gonzalez-Guarda, Guerra, Cummings, Pino, & Becerra, 2015; Jaycox et al., 2006; van Lieshout, Mevissen, van Breukelen, Jonker, & Ruiter, 2016; Wolfe et al., 2009).

Second, evaluation reports should include a detailed description of the assessment of treatment fidelity: how investigators ensured that the ADV intervention program was delivered as it was designed to be delivered. Successful implementation of an ADV intervention program involves a complex set of processes and events that unfold over time and across the individuals who provide and participate in the intervention. Documenting that the program was delivered as designed helps support conclusions about its effectiveness. On the other hand, poor treatment fidelity opens the door to alternative explanations for study results. If null results occur, a fidelity check indicating that the program was not implemented as intended can help reduce the likelihood of abandonment of a potentially useful program. Jaycox and colleagues (2006) provide an excellent example of a fidelity check for their Ending Violence ADV intervention program:

We assessed fidelity to the Ending Violence curriculum via two mechanisms. A single expert observed 10% of classes and rated the content and quality of the presentation. Observations were selected to obtain a variety of implementers, schools, and sessions. Delivery style, overall presentation, and interaction with participants were rated on five-point scales (poor to excellent); average ratings fell between good and very good, with one exception (the average rating of use of visual aids fell between fair and good). On average, 69% of curriculum elements were covered completely, 26% covered partially, and 5% of elements not covered. Implementers rated their own presentation for the amount of content they covered and class compliance for 153 of 165 class sessions (92%). Implementers nearly always reported covering at least 90% of the material, with only five sessions reported in the range of 76%–90%. Implementers rated the quality of the program delivered as good to excellent in all but three sessions (which were rated as fair). Classes were rated as moderately to extremely cooperative or compliant in all but 11 sessions, which were rated as "a little bit" cooperative or compliant. Classes were rated as moderately to extremely engaged and interested in all but six sessions, which were rated as "a little bit" engaged and interested.

(p. 697)

Additionally, it is helpful for readers to know the duration of the intervention, including the number of sessions and how long they last; the format (e.g., group, individual, community, online vs. in person); and a description of the individuals who administered the intervention and the training and oversight they received, so that readers can judge whether the intervention program is feasible for administration in their desired setting. If journal restrictions (e.g., page limits) prohibit this level of detail, such information can be made available through supplemental materials; see Taylor, Stein, Woods, and Mumford's (2011) publicly available data report. We summarize these descriptors for the RCTs we identified in Table 44.1.

Investigators should also describe the control condition thoroughly, and explain what the control condition is designed to control. In the studies we reviewed, investigators deployed a wide range of control conditions, including wait-list controls (i.e., no treatment), treatment-as-usual, and active comparison interventions. Control conditions that control for non-specific aspects of the ADV intervention (e.g., therapist/service-provider contact) are particularly rigorous. These help provide stronger evidence that any observed ADV intervention effects are attributable to the ADV intervention itself.

Allocation of participants to treatment and control conditions

Random assignment to treatment and control conditions serves multiple purposes in the evaluation of ADV program effects. Perhaps the most important is to increase the likelihood that the intervention and control groups are equivalent on key study variables at the outset (Kleijnen, Gøtzsche, Kunz, Oxman, & Chalmers, 1997). Key variables include such things as demographic characteristics, the outcomes of interest (e.g., dating violence perpetration), and variables targeted by the intervention or potentially linked to the outcome of interest. The practice of random assignment involves the generation of a random allocation sequence, so that the pattern of unit (individuals, classrooms, schools) assignment to the treatment and control conditions is not predictable. Random assignment procedures range from simple to quite complex, and investigators should clearly report the methods used to generate the random allocation sequence, as well as allocation concealment procedures. To provide an example, Baiocchi and colleagues (2017) detail the specific randomization algorithm used in their supplemental material:

The study's statistician used a nonbipartite matching algorithm to find optimal matched pairs (4). The characteristics and assignments of the matched-pairs design are summarized in Table 2. A binary vector of length 16, representing the 16 matched pairs, was created using the sample function in R. A 1 (intervention) or a 0 (SOC) was sampled for each of the 16 entries in the vector, with the probability of sampling a 1 being ½. This approach ensures that each school had an equal probability of being assigned to the intervention.

(ESM 1 p. 2)
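The within-pair allocation Baiocchi and colleagues describe amounts to a fair coin flip for each matched pair. A minimal sketch of that step is below; pair construction by the matching algorithm is omitted, and the school names are hypothetical.

```python
import random

def assign_matched_pairs(pairs, seed=None):
    """For each matched pair, randomly assign one member to the intervention
    and the other to the control condition (probability 1/2 each way)."""
    rng = random.Random(seed)  # seeding makes the allocation reproducible/auditable
    allocation = {}
    for unit_a, unit_b in pairs:
        if rng.random() < 0.5:
            allocation[unit_a], allocation[unit_b] = "intervention", "control"
        else:
            allocation[unit_a], allocation[unit_b] = "control", "intervention"
    return allocation

# Hypothetical schools already matched on baseline characteristics.
pairs = [("school_1", "school_2"), ("school_3", "school_4")]
print(assign_matched_pairs(pairs, seed=7))
```

Generating the sequence from a seeded generator, separately from the people enrolling units, is one simple way to support the allocation concealment that the CONSORT items call for.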

Random assignment to condition increases the likelihood of equivalent groups, but it does not guarantee it. Thus, investigators should ideally report each group's characteristics at baseline, as was done by DePrince and colleagues (2015):

[W]e evaluated equivalence of the adolescents in the three groups (RD/EF, SL/F, and no-treatment groups) in terms of a host of demographic (e.g., age, ethnicity, placement type, school level) and individual difference (e.g., violence exposure, previous healthy relationship classes) factors. The only significant group difference noted related to witnessing domestic violence: 85% of youth in the SL/F group reported witnessing domestic violence relative to 55% in the RD/EF and 67% in the no-treatment group (f = 14.22; p = .001).

(p. S36)
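A baseline equivalence check of this kind often reduces to a Pearson chi-square test on a group-by-characteristic contingency table. The sketch below implements the statistic directly; the counts are hypothetical and are not the study's data.

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = witnessed / did not witness domestic violence,
# columns = three randomized groups.
table = [
    [51, 55, 40],  # witnessed
    [9, 10, 20],   # did not witness
]
print(round(chi_square(table), 2))
```

The statistic would be compared against a chi-square distribution with (r − 1)(c − 1) degrees of freedom (here, 2) to obtain the p value reported alongside it.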

Assessment of outcome

Existing evaluations determine program efficacy across a broad range of ADV constructs, including self-reported behaviors (victimization, perpetration, bystander behaviors), attitudes about dating aggression, knowledge of ADV, and self-efficacy to manage relationship conflict in healthy or prosocial ways (see Table 44.1 for these assessments by study). Although ADV interventions broadly aim to reduce the occurrence of ADV victimization and perpetration, many also seek to influence purported precursors, maintaining factors, or other variables thought to contribute to the onset and continuation of relationship violence.

An overall strength of the studies included in this review is the commitment to follow-up assessments. Most studies included assessments beyond an immediate posttest (e.g., Cunningham et al., 2013; DePrince et al., 2015), and several describe repeated assessments up to four years post-intervention (Coker et al., 2017; Foshee et al., 2005; Gardner & Boellaard, 2007). An important consideration in ADV intervention research is the timing of assessments. The assessment schedule might be determined, in part, by what is feasible in the setting where the intervention is being tested. Scientific considerations include: (1) how quickly the intervention is expected to show effects and (2) the extent to which some constructs and behaviors are likely to change more quickly than others. Summarizing the rationale for the timing of the assessments would be beneficial.

Most investigators routinely describe their measures well, including the number of items, sample items, and some index of the reliability of the measures in the study sample. Ideally, investigators also provide evidence of convergent and/or criterion validity of measures as documented in previous literature with relevant samples. An excellent illustration of this can be found in Foshee and colleagues (2015), in which a standardized violence outcome measure is described:

The perpetration of and the victimization from psychological dating abuse were assessed with items from the Safe Dates Psychological Dating Abuse Scales (Centers for Disease Control and Prevention 2006; Foshee 1996). The Safe Dates Dating Abuse Scales (for assessing psychological and physical dating abuse) have high internal consistency (a = .94) and are among the most widely used scales for assessing dating violence among adolescents (Centers for Disease Control and Prevention 2006). The scales have been found to correlate with other constructs as expected and produce prevalence estimates comparable to those produced with other dating abuse scales (Foshee et al., 2001). To assess perpetration, the adolescent was asked how many times he/she had ever (1) insulted a date in front of others, (2) not let a date do things with other people, (3) threatened to hurt a date, (4) hurt a date's feelings on purpose, and (5) said mean things to a date. Parallel questions were asked to assess victimization by asking adolescents how many times these things had been done to them. Responses were summed to create the perpetration of psychological dating abuse and the victimization from psychological dating abuse composite variables.

(pp. 1000-1001)

Participant retention and missing data

Retaining participants in the study is vital to a rigorous RCT, because it helps ensure the internal validity of the trial’s experimental design. Unfortunately, many published RCTs evaluating ADV programs provide little description of the strategies employed to retain participants. Descriptions of participant retention strategies, particularly those that are successful, can help advance knowledge on best practices for participant retention.

Strategies reported in the literature include monetary compensation (e.g., Foshee et al., 2015; Gonzalez-Guarda et al., 2015), assistance with transportation costs (e.g., DePrince et al., 2015), refreshments and small gifts at intervention sessions, and “loyalty cards” redeemable for gift cards after attending a certain number of sessions (Mathews et al., 2016). Similarly, Langhinrichsen-Rohling and Turner (2012) articulated clear efforts to retain participants in their intervention:

Attendance incentives included: facilitating transportation to each session, weekly check-in/reminder calls from project staff, in-session snacks and drinks, optional color printed take-home copies of session materials, on-site childcare, and small incentives for an on-site store that was stocked with essential childcare items including diapers.

(p. 387)

Missing data are inevitable in most RCTs, particularly those involving adolescents as participants. Missing data can occur for a number of reasons. As implied earlier, one form of missing data occurs with attrition of study participants (i.e., loss of all participant data beyond a given time point). It is often useful to examine and report on differences between participants with and without complete data (using baseline data), especially on primary variables of interest. Another form of missing data occurs when a participant provides incomplete data on certain measures, such as by not answering some items on questionnaires (loss of only some participant data at a time point).
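The attrition check described above can be sketched in a few lines; the data and variable names below are hypothetical:

```python
import numpy as np

# Hypothetical baseline scores on a primary outcome, flagged by whether the
# participant completed the follow-up assessment (True) or was lost to attrition (False).
baseline = np.array([10, 12, 9, 14, 11, 15, 8, 13], dtype=float)
retained = np.array([1, 1, 0, 1, 0, 1, 1, 0], dtype=bool)

# Compare baseline means for completers vs. non-completers; a large difference
# suggests attrition may bias conclusions drawn from follow-up data.
mean_retained = baseline[retained].mean()
mean_lost = baseline[~retained].mean()
print(round(mean_retained, 2), round(mean_lost, 2))  # 11.8 11.0
```

In a real report, this comparison would typically be run for each primary variable and supplemented with a significance test or effect size.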

There are several approaches to handling missing data (Baraldi & Enders, 2010). Two traditional approaches include restricting analyses to cases with complete data (known as complete-case analysis; see Foshee et al., 1998, for example), or using different subsamples of complete data depending on the analysis presented (known as pairwise deletion; see Fay & Medway, 2006). Such techniques should be used with caution, as they can produce biased estimates when data missingness is related to the variables of interest (as is often the case). Another common technique for handling missing data is imputation. Such strategies infer values of missing data from available data and allow the researcher to produce a complete dataset. Single imputation (such as mean imputation or last observation carried forward; Coker et al., 2017) and approaches such as maximum likelihood estimation and multiple imputation (Foshee et al., 2015) are frequently used to account for bias due to missing data. Other researchers employ analytic techniques that accommodate incomplete data directly (regardless of the number of completed assessments or questionnaire completeness) rather than imputing missing values (discussed next in the Data analysis section).
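As a minimal illustration of two of these strategies (complete-case analysis and single mean imputation), using hypothetical questionnaire data:

```python
import numpy as np
import pandas as pd

# Hypothetical posttest data: two participants skipped some questionnaire items.
df = pd.DataFrame({
    "item1": [3.0, 2.0, np.nan, 4.0],
    "item2": [1.0, np.nan, 2.0, 3.0],
})

# Complete-case (listwise) analysis: keep only rows with no missing values.
complete = df.dropna()
print(len(complete))  # 2 -- half the sample is discarded

# Single (mean) imputation: replace each missing value with that item's mean.
# Note this understates variability and can bias estimates if data are not
# missing completely at random -- hence the caution urged in the text.
imputed = df.fillna(df.mean())
print(int(imputed.isna().sum().sum()))  # 0 -- no missing values remain
```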

There are pros and cons to each of the various approaches to handling missing data, and it is likely that multiple methods could be appropriate for any given study. However, different methods for handling missing data can yield different findings. Therefore, researchers should clearly specify the method used for handling missing data as well as their reasoning for choosing that method.

Data analysis

As with any study, appropriate statistical analyses (those that adequately address the research question and hypotheses) are needed for determining the effects of ADV intervention programs. For instance, an RCT that purports to examine differences between an intervention and control condition should utilize a between-groups design in analyzing the data. Appropriate analytic strategies are of specific concern for evaluating ADV interventions because, increasingly, researchers are using sophisticated research designs, such as cluster randomization across multiple classrooms and schools. Such designs necessitate analyses that account for dependency across subjects within the same cluster. For instance, students within the same school are likely to be similar to one another due to shared characteristics of the school. Such similarities could artificially inflate intervention effects if the dependency of students within schools is not appropriately modeled in the analyses. There are several ways in which dependent data can be appropriately modeled in statistical analyses, including the use of nesting procedures, as Taylor and colleagues (2013) detail:

Given the nested nature of our data, variables at the student level, class level, and building level may be correlated. Because our substantive interest is in the individual outcomes, and because of the need to adjust for correlated standard errors, we do not present simple means for the treatment and control groups. We included a robust variance estimate to adjust for within-cluster correlation called the Huber/White/sandwich estimate of variance (Froot, 1989; Huber, 1967; Rogers, 1993; White, 1980; Williams, 2000), the vce (cluster clustvar) option in Stata 8.0. For our count data, we used a negative binomial regression with a robust variance estimate. We used logistic regression with a robust variance estimate for our prevalence outcome variables.

(p. 68)
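The cluster adjustment Taylor and colleagues describe can be sketched from first principles. The simulation below is a simplified illustration (OLS with the basic Huber/White cluster sandwich, simulated data, no finite-sample correction), not the authors' actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 students nested in 8 schools (clusters of 5).
n_clusters, per_cluster = 8, 5
school = np.repeat(np.arange(n_clusters), per_cluster)
x = rng.normal(size=n_clusters * per_cluster)
# A shared school effect induces within-cluster correlation in the outcome.
y = 0.5 * x + rng.normal(size=n_clusters)[school] + rng.normal(size=x.size)

X = np.column_stack([np.ones_like(x), x])   # intercept + predictor
beta = np.linalg.solve(X.T @ X, X.T @ y)    # ordinary least squares estimates
resid = y - X @ beta

# Huber/White cluster sandwich: bread @ meat @ bread, where the "meat" sums
# the outer products of per-cluster score contributions X_g' e_g.
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for g in range(n_clusters):
    Xg, eg = X[school == g], resid[school == g]
    score = Xg.T @ eg
    meat += np.outer(score, score)
cluster_se = np.sqrt(np.diag(bread @ meat @ bread))
print(cluster_se.shape)  # one cluster-robust standard error per coefficient
```

Software such as Stata's `vce(cluster clustvar)` applies the same estimator with an additional small-sample correction.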

A rigorous method for handling data in an RCT is an intent-to-treat analysis, in which all participants are included and modeled within the condition to which they were originally randomized, regardless of whether they participated in their assigned condition or completed all follow-up assessments. That is, participants assigned to an intervention condition are analyzed with the intervention group, even if they failed to complete the intervention. Additionally, participant data from a baseline assessment are retained in analyses even if the participant was subsequently lost to attrition.
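A minimal sketch of the intent-to-treat principle, using hypothetical trial records: outcomes are grouped by randomized assignment regardless of attendance.

```python
import pandas as pd

# Hypothetical trial records: 'assigned' is the randomized condition;
# 'attended' records whether the participant actually completed the program.
df = pd.DataFrame({
    "assigned": ["intervention", "intervention", "control", "control"],
    "attended": [True, False, True, True],
    "outcome":  [2.0, 5.0, 4.0, 6.0],
})

# Intent-to-treat: group by randomized assignment, ignoring attendance.
itt_means = df.groupby("assigned")["outcome"].mean()
print(itt_means["intervention"])  # 3.5 -- the non-attender is still included

# By contrast, a per-protocol analysis drops the non-attender, which can
# undermine the comparability that randomization created.
per_protocol = df[df["attended"]].groupby("assigned")["outcome"].mean()
print(per_protocol["intervention"])  # 2.0
```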

There are several statistical approaches appropriate for intent-to-treat designs; for instance, Connolly and colleagues (2015) utilize a multilevel linear model that includes all available data for participants, regardless of whether all follow-up assessments were completed:

Multilevel linear models (MLMs; Raudenbush & Bryk, 2002; Snijders & Bosker, 1999) using Full Information Maximum Likelihood (FIML) were fitted to the data to determine the program effects on knowledge, attitudes, victimization, and emotional school adjustment. ... A second important characteristic of MLM using FIML is that it uses all the available participant information for the analysis, even those with missing data at one of the assessments. This use of all data when conducting inference tests on groups with small sample size makes the inference more efficient and also increases accuracy of the standard error (Laird, 1988).

(p. 415)
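The practical consequence of using all available data can be illustrated with a hypothetical long-format dataset: a likelihood-based longitudinal model retains partial completers whom listwise deletion would discard entirely.

```python
import pandas as pd

# Hypothetical long-format follow-up data: participant 103 missed wave 3.
long = pd.DataFrame({
    "id":    [101, 101, 101, 102, 102, 102, 103, 103],
    "wave":  [1, 2, 3, 1, 2, 3, 1, 2],
    "score": [5, 4, 3, 6, 6, 5, 7, 6],
})

# A complete-case (wide-format) analysis keeps only participants observed
# at every wave, so participant 103 is discarded entirely.
wide = long.pivot(index="id", columns="wave", values="score")
complete_ids = wide.dropna().index
print(len(complete_ids))  # 2

# A likelihood-based model such as MLM with FIML is fit to the long-format
# data, so all 8 observed rows contribute, including 103's two waves.
print(len(long))  # 8
```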

Considerations prior to initiating an RCT

Background information on the investigators’ efforts to engage and collaborate with community stakeholders and to develop or adapt an intervention that is responsive to the unique needs of a given community can be very informative. To illustrate, we present several of the steps reported by Rothman and Wang (2016) in the development of their program, Real Talk:

The Real Talk intervention development process used the Intervention Mapping protocol method, which entails six steps: (a) needs assessment, including establishing a participatory planning group; (b) identifying behavior change goals; (c) selecting a behavior change theory to guide the development of the intervention; (d) creating the program and preparing materials; (e) implementing the intervention; and (f) evaluating and refining the intervention based on results (Bartholomew, Parcel, & Kok, 1998). In addition to these steps, the intervention developer gathered background information on brief intervention and motivational interviewing by reading about them and participating in training delivered by national experts (BNI ART Institute, 2015), interviewing the staff of an existing alcohol brief intervention, using a Delphi process to vet the ADA intervention script with a panel of experts, roleplaying the intervention for a focus group of youth from the target population, and revising the intervention based on youth and expert feedback before pilot-testing . . . intervention materials were iteratively created over a year-long period. A first-draft copy of the intervention manual, intervention handouts and props, and a resource book containing referral options were put together collaboratively by the principal investigator, a brief intervention expert, research assistants, and a project advisor from a nationally funded, local youth dating violence prevention program (i.e., Start Strong). Drafts were shared with youth peer-leaders affiliated with the Start Strong program, field-tested with several patients, and then materials were refined.

(pp. 434-436)

Human subjects protection is salient to all violence research; describing the handling of particular concerns in ADV program evaluations can help provide examples of best practices, establish practice norms, and assure readers that research protocols are protective of human subjects. Considerations include assent practices with minors, mandated child-abuse reporting requirements (including state and setting-based restrictions around what is reported, to whom, and under what circumstances), confidentiality, and assessments of safety and risk, among others. Common publishing practices require inclusion of language around institutional review board approval and description of consent/assent procedures. However, additional information on human subjects protections is often not reported in manuscripts describing ADV evaluations. We encourage researchers to describe specific protections, safety protocols, and checks utilized to uphold and refine ethical practices. For example, Gonzalez-Guarda et al. (2015) described informing and setting expectations among participants around confidentiality and mandated reporting during sessions:

[Prior to treatment,] confidentiality and its limits, including the research team’s role as mandated reporters, were clearly defined to the participants. The group was made aware that any suspicion or statement of physical or sexual abuse and/or neglect would be reported to ensure the child’s safety and allow for appropriate assistance of the family.

(p. 5)

Baiocchi et al. (2017) described the handling of participant disclosure of sexual assault victimization:

This intervention was a behavior modification program with a low risk of an increase in harm due to the intervention. Surveys were anonymous, so incidences of sexual assault were only identified if the participants decided to disclose to the trainers or other research staff. Ujamaa-Africa instructors and researchers are trained to link students who disclose sexual assault to organizations such as Médecins Sans Frontières and to programs and services provided by Ujamaa-Africa.

(p. 821)

Conclusions

ADV is a sufficiently significant public health concern that a sizable body of research has accumulated to advance knowledge about it, and numerous intervention programs have been developed to try to reduce or prevent it. There is considerable variability in the methodological rigor of studies conducted to evaluate the outcomes of such programs, and further variability in the extent to which certain aspects of the methods are reported. To make clearer inferences about the outcomes of RCTs evaluating ADV intervention programs, the field will be advanced by attention to the methodological points covered in this chapter. Moreover, in addition to strengthening the ability to make inferences about the results of such RCTs, reporting fully and clearly on these areas of research design and methods can provide researchers and service providers a more nuanced understanding of the factors that give rise to successful or unsuccessful intervention efforts.

Critical findings

  • Reviews of the literature indicate that programs designed to prevent and reduce adolescent dating violence (ADV) tend to have favorable effects on attitudes and knowledge about violence, but evidence of their effectiveness at preventing ADV perpetration and victimization is less compelling; thus, continued intervention research is needed.
  • A comprehensive search of published articles and gray literature (e.g., empirical theses and dissertations) pertaining to ADV intervention programs through August 2017 yielded 39 articles in which a randomized controlled trial (RCT) was used to evaluate an ADV intervention program.
  • There is considerable variability in the methodological rigor of studies conducted to evaluate the outcomes of ADV intervention programs, and further variability in the extent to which certain aspects of the methods are reported. Increased rigor and more consistent reporting of key aspects of research methods can help strengthen research and knowledge on ADV prevention.
  • Examples of exemplary methodological practices for designing, conducting, and reporting ADV research are available in the literature, and are presented in the domains of (1) sample description, (2) description of treatment and control conditions, (3) allocation of participants to treatment and control conditions, (4) assessment of outcomes, (5) participant retention and missing data, (6) data analysis, and (7) considerations prior to initiating an RCT.

Implications for policy, practice, and research

  • Additional research on adolescent dating violence (ADV) intervention programs is necessary to understand how best to prevent ADV perpetration and victimization.
  • Since randomized controlled trials (RCTs) are often considered the gold standard design for drawing conclusions about the effects of an intervention program, it is likely that researchers will be designing RCTs for the purpose of evaluating ADV intervention programs.
  • To make clear inferences about the outcomes of RCTs evaluating ADV intervention programs, greater attention needs to be directed at research methodology.
  • When researchers report fully and clearly on their (1) sample, (2) treatment and control conditions, (3) allocation of participants to treatment and control conditions, (4) assessment of outcomes, (5) participant retention and missing data, (6) data analysis, and (7) considerations prior to initiating an RCT, researchers and other consumers of the research literature (service providers, policy makers) can gain a more nuanced understanding of the factors that give rise to successful or unsuccessful intervention efforts.

References

Ackard, D. M., Eisenberg, M. E., & Neumark-Sztainer, D. (2007). Long-term impact of adolescent dating violence on the behavioral and psychological health of male and female youth. The Journal of Pediatrics, 151(5), 476–481. https://doi.org/10.1016/j.jpeds.2007.04.034

Baiocchi, M., Omondi, B., Langat, N., Boothroyd, D. B., Sinclair, J., Pavia, L., . . . Sarnquist, C. (2017). A behavior-based intervention that prevents sexual assault: The results of a matched-pairs, cluster-randomized study in Nairobi, Kenya. Prevention Science, 18(7), 818–827. https://doi.org/10.1007/s11121-016-0701-0

Baraldi, A. N., & Enders, C. K. (2010). An introduction to modern missing data analyses. Journal of School Psychology, 48(1), 5–37. https://doi.org/10.1016/j.jsp.2009.10.001

Bartholomew, L. K., Parcel, G. S., & Kok, G. (1998). Intervention mapping: A process for developing theory- and evidence-based health education programs. Health Education & Behavior, 25, 545–563. https://doi.org/10.1177/109019819802500502

BNI ART Institute. (2015). Training in SBIRT. Retrieved from http://www.bu.edu/bniart/sbirt- training-consulting/sbirt-training/

Brown, K., Arnab, S., Bayley, J., Newby, K., Joshi, P., Judd, B., . . . Clarke, S. (2012). Tackling sensitive issues using a game-based environment: Serious game for relationships and sex education (RSE). In B. K. Wiederhold & G. Riva (Eds.), Annual review of cybertherapy and telemedicine (pp. 165–171). Amsterdam: IOS Press.

Carver, K., Joyner, K., & Udry, J. R. (2003). National estimates of adolescent romantic relationships. In P. Florsheim (Ed.), Adolescent romantic relations and sexual behavior: Theory, research, and practical implications (pp. 23–56). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.

Centers for Disease Control and Prevention. (2006). Measuring intimate partner violence victimization and perpetration: A compendium of assessment tools. Retrieved from http://stacks.cdc.gov/view/cdc/11402/

Centers for Disease Control and Prevention. (2019). Preventing teen dating violence [Factsheet]. Retrieved from www.cdc.gov/violenceprevention/pdf/tdv-factsheet.pdf

Cheung, A. C. K., & Slavin, R. E. (2016). How methodological features affect effect sizes in education. Educational Researcher, 45(5), 283–292. https://doi.org/10.3102/0013189X16656615

Coker, A. L., Bush, H. M., Cook-Craig, P. G., DeGue, S. A., Clear, E. R., Brancato, C. J., . . . Recktenwald, E. A. (2017). RCT testing bystander effectiveness to reduce violence. American Journal of Preventive Medicine, 52(5), 566–578. https://doi.org/10.1016/j.amepre.2017.01.020

Collins, W. A., Welsh, D. P., & Furman, W. (2009). Adolescent romantic relationships. Annual Review of Psychology, 60, 631–652. https://doi.org/10.1146/annurev.psych.60.110707.163459

Connolly, J., Josephson, W., Schnoll, J., Simkins-Strong, E., Pepler, D., MacPherson, A., . . . Jiang, D. (2015). Evaluation of a youth-led program for preventing bullying, sexual harassment, and dating aggression in middle schools. The Journal of Early Adolescence, 35(3), 403–434. https://doi.org/10.1177/0272431614535090

Cunningham, R. M., Whiteside, L. K., Chermack, S. T., Zimmerman, M. A., Shope, J. T., Raymond Bingham, C., . . . Walton, M. A. (2013). Dating violence: Outcomes following a brief motivational interviewing intervention among at-risk adolescents in an urban emergency department. Academic Emergency Medicine, 20(6), 562–569. https://doi.org/10.1111/acem.12151

De La Rue, L., Polanin, J. R., Espelage, D. L., & Pigott, T. D. (2017). A meta-analysis of school-based interventions aimed to prevent or reduce violence in teen dating relationships. Review of Educational Research, 87(1), 7–34. https://doi.org/10.3102/0034654316632061

DePrince, A. P., Chu, A. T., Labus, J., Shirk, S. R., & Potter, C. (2015). Testing two approaches to revictimization prevention among adolescent girls in the child welfare system. Journal of Adolescent Health, 56(2), S33–S39. https://doi.org/10.1016/j.jadohealth.2014.06.022

Edwards, S. R., & Hinsz, V. B. (2014). A meta-analysis of empirically tested school-based dating violence prevention programs. SAGE Open, 4(2), 1–7. https://doi.org/10.1177/2158244014535787

Espelage, D. L., Low, S., Polanin, J. R., & Brown, E. C. (2013). The impact of a middle school program to reduce aggression, victimization, and sexual violence. Journal of Adolescent Health, 53(2), 180–186. https://doi.org/10.1016/j.jadohealth.2013.02.021

Espelage, D. L., Low, S., Polanin, J. R., & Brown, E. C. (2015). Clinical trial of Second Step© middle-school program: Impact on aggression & victimization. Journal of Applied Developmental Psychology, 37, 52–63. https://doi.org/10.1016/j.appdev.2014.11.007

Exner-Cortens, D., Eckenrode, J., & Rothman, E. (2013). Longitudinal associations between teen dating violence victimization and adverse health outcomes. Pediatrics, 131(1), 71–78. https://doi.org/10.1542/peds.2012-1029

Fay, K. E., & Medway, F. J. (2006). An acquaintance rape education program for students transitioning to high school. Sex Education, 6(3), 223–236. https://doi.org/10.1080/14681810600836414

Fellmeth, G. L., Heffernan, C., Nurse, J., Habibula, S., & Sethi, D. (2013). Educational and skills-based interventions for preventing relationship and dating violence in adolescents and young adults: A systematic review. Campbell Systematic Reviews, 9(1), 1—124. https://doi.org/10.4073/csr.2013.14

Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). Thousand Oaks, CA: Sage.

Foshee, V. (1996). Gender differences in adolescent dating abuse prevalence, types, and injuries. Health Education Research, 11(3), 275–286. https://doi.org/10.1093/her/11.3.275-a

Foshee, V. A., Bauman, K. E., Arriaga, X. B., Helms, R. W., Koch, G. G., & Linder, G. F. (1998). An evaluation of Safe Dates, an adolescent dating violence prevention program. American Journal of Public Health, 88(1), 45–50. https://doi.org/10.2105/AJPH.88.1.45

Foshee, V. A., Bauman, K. E., Ennett, S. T., Linder, G. F., Benefield, T., & Suchindran, C. (2004). Assessing the long-term effects of the Safe Dates program and a booster in preventing and reducing adolescent dating violence victimization and perpetration. American Journal of Public Health, 94(4), 619–624. https://doi.org/10.2105/AJPH.94.4.619

Foshee, V. A., Bauman, K. E., Ennett, S. T., Suchindran, C., Benefield, T., & Linder, G. F. (2005). Assessing the effects of the dating violence prevention program “Safe Dates” using random coefficient regression modeling. Prevention Science, 6(3), 245–258. https://doi.org/10.1007/s11121-005-0007-0

Foshee, V. A., Benefield, T., Dixon, K. S., Chang, L. Y., Senkomago, V., Ennett, S. T., . . . Bowling, J. M. (2015). The effects of Moms and Teens for Safe Dates: A dating abuse prevention program for adolescents exposed to domestic violence. Journal of Youth and Adolescence, 44(5), 995–1010. https://doi.org/10.1007/s10964-015-0272-6

Foshee, V. A., Linder, F., MacDougall, J., & Bangdiwala, S. (2001). Gender differences in the longitudinal predictors of adolescent dating violence. Preventive Medicine, 32, 128–141. https://doi.org/10.1006/pmed.2000.0793

Foshee, V. A., Reyes, H. L. M., Ennett, S. T., Cance, J. D., Bauman, K. E., & Bowling, J. M. (2012). Assessing the effects of Families for Safe Dates, a family-based teen dating abuse prevention program. Journal of Adolescent Health, 51(4), 349–356. https://doi.org/10.1016/j.jadohealth.2011.12.029

Foshee, V. A., Reyes, H. L. M., Gottfredson, N. C., Chang, L.-Y., & Ennett, S. T. (2013). A longitudinal examination of psychological, behavioral, academic, and relationship consequences of dating abuse victimization among a primarily rural sample of adolescents. Journal of Adolescent Health, 53(6), 723–729. https://doi.org/10.1016/j.jadohealth.2013.06.016

Froot, K. A. (1989). Consistent covariance-matrix estimation with cross-sectional dependence and heteroskedasticity in financial data. Journal of Financial and Quantitative Analysis, 24, 333–355. https://doi.org/10.2307/2330815

Gardner, S. P., & Boellaard, R. (2007). Does youth relationship education continue to work after a high school class? A longitudinal study. Family Relations: Interdisciplinary Journal of Applied Family Science, 56(5), 490–500. https://doi.org/10.1111/j.1741-3729.2007.00476.x

Gonzalez-Guarda, R. M., Guerra, J. E., Cummings, A. A., Pino, K., & Becerra, M. M. (2015). Examining the preliminary efficacy of a dating violence prevention program for Hispanic adolescents. The Journal of School Nursing, 31(6), 411–421. https://doi.org/10.1177/1059840515598843

Higgins, J., Churchill, R., Tovey, D., Lasserson, T., & Chandler, J. (2011). Update on the MECIR project: Methodological expectations for Cochrane intervention reviews. Cochrane Methods, 2, 2–5. Retrieved from https://injuries.cochrane.org/sites/injuries.cochrane.org/files/public/uploads/Cochrane%2520Methods%2520September%25202011.pdf#page=5

Huber, P. J. (1967). The behavior of maximum likelihood estimates under nonstandard conditions. Paper Presented at the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA.

Jaycox, L. H., McCaffrey, D., Eiseman, B., Aronoff, J., Shelley, G. A., Collins, R. L., & Marshall, G. N. (2006). Impact of a school-based dating violence prevention program among Latino teens: Randomized controlled effectiveness trial. Journal of Adolescent Health, 39(5), 694–704. https://doi.org/10.1016/j.jadohealth.2006.05.002

Joppa, M. C., Rizzo, C. J., Nieves, A. V., & Brown, L. K. (2016). Pilot investigation of the Katie Brown educational program: A school-community partnership. Journal of School Health, 86(4), 288–297. https://doi.org/10.1111/josh.12378

Jouriles, E. N., Choi, H. J., Rancher, C., & Temple, J. R. (2017). Teen dating violence victimization, trauma symptoms, and revictimization in early adulthood. Journal of Adolescent Health, 61(1), 115–119. https://doi.org/10.1016/j.jadohealth.2017.01.020

Jouriles, E. N., McDonald, R., Mueller, V., & Grych, J. H. (2012). Youth experiences of family violence and teen dating violence perpetration: Cognitive and emotional mediators. Clinical Child and Family Psychology Review, 15(1), 58–68. https://doi.org/10.1007/s10567-011-0102-7

Kazdin, A. E. (2017). Research design in clinical psychology (5th ed.). Boston, MA: Pearson.

Kleijnen, J., Gøtzsche, P. C., Kunz, R. H., Oxman, A., & Chalmers, I. (1997). So what’s so special about randomisation? In A. Maynard & I. Chalmers (Eds.), Non-random reflections on health services research: On the 25th anniversary of Archie Cochrane’s effectiveness and efficiency (pp. 231–249). London: BMJ.

Laird, N. M. (1988). Missing data in longitudinal studies. Statistics in Medicine, 7, 305–315. https://doi.org/10.1002/sim.4780070131

Langhinrichsen-Rohling, J., & Turner, L. A. (2012). The efficacy of an intimate partner violence prevention program with high-risk adolescent girls: A preliminary test. Prevention Science, 13(4), 384–394. https://doi.org/10.1007/s11121-011-0240-7

Levesque, D. A., Johnson, J. L., Welch, C. A., Prochaska, J. M., & Paiva, A. L. (2016). Teen dating violence prevention: Cluster-randomized trial of Teen Choices, an online, stage-based program for healthy, nonviolent relationships. Psychology of Violence, 6(3), 421–432. https://doi.org/10.1037/vio0000049

Lundgren, R., & Amin, A. (2015). Addressing intimate partner violence and sexual violence among adolescents: Emerging evidence of effectiveness. Journal of Adolescent Health, 56(1S), S42–S50. https://doi.org/10.1016/j.jadohealth.2014.08.012

Macgowan, M. J. (1997). An evaluation of a dating violence prevention program for middle school students. Violence and Victims, 12(3), 223–235. https://doi.org/10.1891/0886-6708.12.3.223

Mathews, C., Eggers, S. M., Townsend, L., Aaro, L. E., de Vries, P. J., Mason-Jones, A. J., . . . Wubs, A. (2016). Effects of PREPARE, a multi-component, school-based HIV and intimate partner violence (IPV) prevention programme on adolescent sexual risk behaviour and IPV: Cluster randomised controlled trial. AIDS and Behavior, 20(9), 1821–1840. https://doi.org/10.1007/s10461-016-1410-1

McArthur, L. E. (2010). Intimate partner violence, attachment, and coparenting intervention outcomes among Latino teen parents. Retrieved from ProQuest Dissertations Publishing (3419374).

Miller, E., Goldstein, S., McCauley, H. L., Jones, K. A., Dick, R. N., Jetton, J., . . . Tancredi, D. J. (2015). A school health center intervention for abusive adolescent relationships: A cluster RCT. Pediatrics, 135(1), 76–85. https://doi.org/10.1542/peds.2014-2471

Miller, E., Tancredi, D. J., McCauley, H. L., Decker, M. R., Virata, M. C. D., Anderson, H. A., . . . Silverman, J. G. (2012). “Coaching boys into men”: A cluster-randomized controlled trial of a dating violence prevention program. Journal of Adolescent Health, 51(3), 431–438. https://doi.org/10.1016/j.jadohealth.2012.01.018

Miller, E., Tancredi, D. J., McCauley, H. L., Decker, M. R., Virata, M. C. D., Anderson, H. A., . . . Silverman, J. G. (2013). One-year follow-up of a coach-delivered dating violence prevention program: A cluster randomized controlled trial. American Journal of Preventive Medicine, 45(1), 108–112. https://doi.org/10.1016/j.amepre.2013.03.007

Nahapetyan, L., Orpinas, P., Song, X., & Holland, K. (2014). Longitudinal association of suicidal ideation and physical dating violence among high school students. Journal of Youth and Adolescence, 43(4), 629–640. https://doi.org/10.1007/s10964-013-0006-6

Niolon, P. H., Vivolo-Kantor, A. M., Latzman, N. E., Valle, L. A., Kuoh, H., Burton, T., . . . Tharp, A. T. (2015). Prevalence of teen dating violence and co-occurring risk factors among middle school youth in high-risk urban communities. Journal of Adolescent Health, 56(2), S5–S13. https://doi.org/10.1016/j.jadohealth.2014.07.019

Pacifici, C., Stoolmiller, M., & Nelson, C. (2001). Evaluating a prevention program for teenagers on sexual coercion: A differential effectiveness approach. Journal of Consulting and Clinical Psychology, 69(3), 552–559. https://doi.org/10.1037//0022-006X.69.3.552

Peskin, M. F., Markham, C. M., Shegog, R., Baumler, E. R., Addy, R. C., & Tortolero, S. R. (2014). Effects of the It’s Your Game . . . Keep It Real program on dating violence in ethnic-minority middle school youths: A group randomized trial. American Journal of Public Health, 104(8), 1471–1477. https://doi.org/10.2105/AJPH.2014.301902

Polanin, J. R., & Espelage, D. L. (2015). Using a meta-analytic technique to assess the relationship between treatment intensity and program effects in a cluster-randomized trial. Journal of Behavioral Education, 24(1), 133–151. https://doi.org/10.1007/s10864-014-9205-9

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models (2nd ed.). Thousand Oaks, CA: SAGE.

Roberts, K. E. C. (2009). An evaluation of the Expect Respect: Preventing teen dating violence high school program. Retrieved from OhioLINK (ohiou1242323117).

Rogers, W. H. (1993). Regression standard errors in clustered samples. Stata Technical Bulletin, 3, 88–94. Retrieved from http://stata-press.com/journals/stbcontents/stb13.pdf

Rothman, E. F., Stuart, G. L., Heeren, T., Paruk, J., & Bair-Merritt, M. (n.d.). RCT of the Real Talk brief intervention to prevent dating abuse perpetration in adolescent health care settings [Unpublished manuscript].

Rothman, E. F., & Wang, N. (2016). A feasibility test of a brief motivational interview intervention to reduce dating abuse perpetration in a hospital setting. Psychology of Violence, 6(3), 433–441. https://doi.org/10.1037/vio0000050

Rowe, L. S., Jouriles, E. N., & McDonald, R. (2015). Reducing sexual victimization among adolescent girls: A randomized controlled pilot trial of My Voice, My Choice. Behavior Therapy, 46(3), 315–327. https://doi.org/10.1016/j.beth.2014.11.003

Salazar, L. F., & Cook, S. L. (2006). Preliminary findings from an outcome evaluation of an intimate partner violence prevention program for adjudicated, African American, adolescent males. Youth Violence and Juvenile Justice, 4(4), 368–385. https://doi.org/10.1177/1541204006292818

Sargent, K. S., Jouriles, E. N., Rosenfield, D., & McDonald, R. (2017). A high school-based evaluation of TakeCARE, a video bystander program to prevent adolescent relationship violence. Journal of Youth and Adolescence, 46(3), 633–643. https://doi.org/10.1007/s10964-016-0622-z

Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomized trials. Annals of Internal Medicine, 152(11), 726–733. https://doi.org/10.7326/0003-4819-152-11-201006010-00232

Snijders, T. A. B., & Bosker, R. J. (1999). Multilevel analysis: An introduction to basic and advanced multilevel modeling. London, England: SAGE.

Tarrier, N., & Wykes, T. (2004). Is there evidence that cognitive behaviour therapy is an effective treatment for schizophrenia? A cautious or cautionary tale? Behaviour Research and Therapy, 42(12), 1377–1401. https://doi.org/10.1016/j.brat.2004.06.020

Taylor, B. G., Stein, N. D., & Burden, F. (2010). The effects of gender violence/harassment prevention programming in middle schools: A randomized experimental evaluation. Violence and Victims, 25(2), 202–223. https://doi.org/10.1891/0886-6708.25.2.202

Taylor, B. G., Stein, N. D., Mumford, E. A., & Woods, D. (2013). Shifting Boundaries: An experimental evaluation of a dating violence prevention program in middle schools. Prevention Science, 14(1), 64–76. https://doi.org/10.1007/s11121-012-0293-2

Taylor, B. G., Stein, N. D., Woods, D., & Mumford, E. (2011). Shifting Boundaries: Final report on an experimental evaluation of a youth dating violence prevention program in New York City middle schools. Retrieved from www.ncjrs.gov/pdffiles1/nij/grants/236175.pdf

Ting, S.-M. R. (2009). Meta-analysis on dating violence prevention among middle and high schools. Journal of School Violence, 8(4), 328–337. https://doi.org/10.1080/15388220903130197

Vagi, K. J., Olsen, E. O. M., Basile, K. C., & Vivolo-Kantor, A. M. (2015). Teen dating violence (physical and sexual) among US high school students: Findings from the 2013 National Youth Risk Behavior Survey. JAMA Pediatrics, 169(5), 474–482. https://doi.org/10.1001/jamapediatrics.2014.3577

van Lieshout, S., Mevissen, F. E., van Breukelen, G., Jonker, M., & Ruiter, R. A. (2016). Make a Move: A comprehensive effect evaluation of a sexual harassment prevention program in Dutch residential youth care. Journal of Interpersonal Violence, 34(9), 1–29. https://doi.org/10.1177/0886260516654932

Vasilenko, S. A., Kugler, K. C., & Rice, C. E. (2016). Timing of first sexual intercourse and young adult health outcomes. Journal of Adolescent Health, 59(3), 291–297. https://doi.org/10.1016/j.jadohealth.2016.04.019

White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48, 817–838. https://doi.org/10.2307/1912934

Williams, R. L. (2000). A note on robust variance estimation for cluster-correlated data. Biometrics, 56, 645–646. https://doi.org/10.1111/j.0006-341X.2000.00645.x

Wolfe, D. A., Crooks, C., Jaffe, P., Chiodo, D., Hughes, R., Ellis, W., . . . Donner, A. (2009). A school-based program to prevent adolescent dating violence: A cluster randomized trial. Archives of Pediatrics & Adolescent Medicine, 163(8), 692–699. https://doi.org/10.1001/archpediatrics.2009.69

Wolfe, D. A., Wekerle, C., Scott, K., Straatman, A. L., Grasley, C., & Reitzel-Jaffe, D. (2003). Dating violence prevention with at-risk youth: A controlled outcome evaluation. Journal of Consulting and Clinical Psychology, 71(2), 279–291. https://doi.org/10.1037/0022-006X.71.2.279

Wood, L., Egger, M., Gluud, L. L., Schulz, K. F., Jüni, P., Altman, D. G., . . . Sterne, J. A. C. (2008). Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: Meta-epidemiological study. BMJ, 336(7644), 601–605. https://doi.org/10.1136/bmj.39465.451748.AD

Yom, Y.-H., & Eun, L. K. (2005). Effects of a CD-ROM educational program on sexual knowledge and attitude. CIN: Computers, Informatics, Nursing, 23(4), 214–219. https://doi.org/10.1097/00024665-200507000-00009
