Exclusions and Inclusions
We include studies using measures of physical aggression and violence at the individual level (aggregate data are excluded). We include “externalizing” measures only if the authors use a subscale that clearly taps physical aggression. On this basis, the Aggression subscales of the Child Behavior Checklist and the Behavior Assessment System for Children were excluded, because many of their items do not reflect acts of physical aggression. We did not use studies relying on a laboratory proxy for physical aggression, such as the Taylor Aggression Paradigm, and we did not treat measures of psychopathy as synonymous with violence. Studies employing a measure of “violence risk” were excluded unless we could ascertain that the measure was, essentially, a measure of actual violent behavior. The Youth Self Report Aggression subscale includes aggressive thoughts in addition to aggressive acts, but we opted to include it because of its clear emphasis on violent behavior.
We excluded from the table studies comparing violent offenders to sex offenders (Guay, Ouimet, & Proulx, 2005), or sex offenders to other offenders, unless they compared violent sex offenders to nonviolent sex offenders. We did not include studies of “sexual aggression” only. We also excluded studies using only drug addicts, mentally handicapped individuals, or psychiatric inpatient samples; however, we included studies in which the sample of offenders had been referred for psychological evaluation, as this is common practice in correctional settings and a typical route by which IQ tests are administered to inmates.
An important set of exclusions involves certain comparisons in studies that otherwise fit our criteria. In some studies, authors use preliminary correlations to develop their final multivariate models and then report coefficients for only a limited number of “best” regression models. Because these papers do not present every step of the analysis, the importance of the coefficients in the final models is probably exaggerated when considered without knowing the outcomes of the analyses that were not presented. For example, if an author begins with three candidate independent variables and one of them remains in the “best” model, it would appear, for the purposes of our vote count, that 100% of the analyses returned a statistically significant coefficient in the expected direction, when in fact there were probably at least two models in which the coefficients were not statistically significant and therefore went unreported under this modeling approach. The “missing” analyses presumably involve non-significant coefficients, but we cannot make that assumption unless the authors explicitly report the information in a table or in the text. Thus, we generally excluded coefficients produced by this analytic approach out of concern that their inclusion would bias our tallies. Unfortunately, similar model-specification procedures are common and perhaps not always reported, so some of the studies we do include here may contribute to this type of bias. If the authors merely noted that a comparison was “not statistically significant,” we counted that comparison as “null.”
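The arithmetic behind this concern can be sketched with a short hypothetical example. The predictor names and significance pattern below are invented purely for illustration (they are not drawn from any study we reviewed); the point is only how "best model" reporting changes the denominator of a vote count:

```python
# Hypothetical illustration: three candidate predictors are screened,
# but only the one retained in the final "best" model is reported.
# True = the coefficient was statistically significant.
screened = {"predictor_A": True, "predictor_B": False, "predictor_C": False}

# Vote count based only on what the paper reports (retained predictors):
reported = {name: sig for name, sig in screened.items() if sig}
apparent_rate = sum(reported.values()) / len(reported)

# Vote count if every screened comparison had been reported:
actual_rate = sum(screened.values()) / len(screened)

print(f"apparent significance rate: {apparent_rate:.0%}")  # 100%
print(f"actual significance rate:   {actual_rate:.0%}")    # 33%
```

The denominator shrinks from three comparisons to one, so the same single significant coefficient counts as 100% support rather than 33%, which is why such coefficients were generally excluded from our tallies.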
We included comparisons in which measures of intelligence, executive functioning, and cognitive ability were used as independent variables. We did not include behavioral measures of inhibition or self-control (though we do include measures of cognitive control), nor purely perceptual-motor measures of brain functioning such as the finger tapping test, the Purdue Pegboard test, or the Star Tracing test. We also excluded measures of motor function, rhythm, and tactile function. To be included, measures of “cognition,” “problem solving,” and the like had to relate to fundamental ability, not to the content of the cognitions (such as hostile attributions) or to the nature of the problem-solving strategy itself. Although some authors incorporate a measure of arithmetic in their tests of EF, we include all the math tests in the tables for Chapter 5, which addresses education factors. We did not include intervention studies unless comparisons between the independent variables of interest here and the outcomes were reported (not just the effect of the intervention itself); because we are mainly interested in children’s native executive functioning, intervention participation is not treated as a relevant independent variable.
Deciding how to organize studies of executive functioning was somewhat problematic. In their meta-analytic review, Ogilvie et al. (2011) explain that many measures of EF involve complex, multifaceted tasks drawing on multiple processes (not just memory or planning, for example). They opted to estimate effect sizes for each test, but this resulted in very small numbers of comparisons (k = 1 in many cases). We therefore opted to use gross categorizations where possible (memory, planning), along with a combined “executive function” category for studies that combined scores or used tests that appear to tap multiple skills.
The final tables are broken down into 10 categories (see Appendices A and B for the list of studies by category):
Full Scale IQ
Verbal Ability (including Verbal IQ)
Performance Intelligence
Deviation Scores (PIQ–VIQ)
Other Cognitive Tests
Planning
Problem Solving Ability
Attention
Cognitive Control
Other Measures of Executive Function