Passive Self-Report Data

The growth of social media and online résumé databases creates new opportunities to gather biodata through advanced technology such as web crawlers, automatic scanning, and data mining. Such approaches are already in use in staff recruiting in business and are likely to spread over time to other sectors, including education, and to other uses. Related to this are attempts to collect behavioral data as indicators of noncognitive skills. For example, Novarese and Di Giovinazzo (2013) used the time elapsed between being informed of acceptance and registering as an indicator of procrastination, which was found to predict school performance. Tempelaar, Rienties and Giesbers (2014) measured study effort by time logged in, number of exercises completed and participation in an online summer course.
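The behavioral indicators described above are simple to derive once the underlying log data exist. The sketch below, with hypothetical records and field names, computes a registration-delay proxy for procrastination in the spirit of Novarese and Di Giovinazzo (2013); it is an illustration, not their actual procedure.

```python
from datetime import date

# Hypothetical records: (student_id, acceptance_date, registration_date).
# The gap between acceptance and registration serves as a behavioral
# proxy for procrastination.
records = [
    ("s01", date(2023, 6, 1), date(2023, 6, 3)),
    ("s02", date(2023, 6, 1), date(2023, 6, 28)),
    ("s03", date(2023, 6, 2), date(2023, 6, 2)),
]

def registration_delay_days(accepted: date, registered: date) -> int:
    """Days between acceptance notification and registration."""
    return (registered - accepted).days

delays = {sid: registration_delay_days(a, r) for sid, a, r in records}
print(delays)  # {'s01': 2, 's02': 27, 's03': 0}
```

The same pattern extends to the effort indicators used by Tempelaar et al. (2014): each is an aggregate (total login time, exercise count) computed per student from timestamped event logs.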

Ratings by Others

Ratings of a target (e.g., an applicant, a worker, a student) by others can and often do take the form of a simple rating scale, such as the standard 5-point Likert-type scale. An example is ETS's Personal Potential Index (PPI; Kyllonen, 2008), which is designed to supplement the GRE (for this instrument, the categories are "below average," "average," "above average," "outstanding [top 5%]," and "truly exceptional [top 1%]"). Evaluators rate graduate school applicants on six dimensions (knowledge and creativity, communication skills, teamwork, resilience, planning and organization, ethics and integrity), with four items per dimension (e.g., "supports the effort of others," "can overcome challenges and setbacks," "works well in group settings"), and provide an overall rating. (ETS provides the items, score reports and background information for the instrument.)

Several meta-analyses conducted in the past few years show that ratings by others on average yield higher correlations with outcomes than self-ratings do, and that they add incremental validity over self-ratings for predicting educational and job success (Connelly & Ones, 2010; Oh, Wang & Mount, 2011). In addition, the research suggests that the better the evaluator knows the target, the better the judgment, although for many traits mere casual acquaintance (as opposed to interpersonal intimacy) is sufficient to improve on self-ratings in predicting future behavior. A large-scale predictive validity study of the PPI, sponsored by ETS and the Council of Graduate Schools and involving several thousand graduate students from six universities, is currently underway. Participating departments are requiring PPI scores for admissions and providing student outcomes for evaluation (see Klieger, Holtzman & Ezzo, 2012); results will be available soon.
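Incremental validity here means that adding others' ratings to a regression already containing self-ratings raises the variance explained in the outcome. The sketch below illustrates the idea with synthetic data (all values and noise levels are assumptions for illustration, not estimates from the cited meta-analyses); the observer rating is simulated as a less noisy reading of the same latent trait.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
trait = rng.normal(size=n)                              # latent trait
self_rating = trait + rng.normal(scale=1.0, size=n)     # noisier self-report
other_rating = trait + rng.normal(scale=0.7, size=n)    # assumed less noisy observer
outcome = trait + rng.normal(scale=0.5, size=n)         # criterion (e.g., GPA)

def r_squared(X, y):
    """In-sample R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_self = r_squared(self_rating[:, None], outcome)
r2_both = r_squared(np.column_stack([self_rating, other_rating]), outcome)
print(f"self only: {r2_self:.3f}, self + other: {r2_both:.3f}")
```

The gap between the two R² values is the incremental validity; in real studies it is estimated with corrections for range restriction and unreliability rather than raw OLS.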

Anchoring vignettes could be used for ratings by others in the same way as they are used for self-ratings, but there is little if any published research on using anchoring vignettes this way. Nor is there any on using forced-choice methods in ratings by others. One reason is that both methods (anchoring vignettes and forced choice) take longer to complete than rating statements. Another is that the problems they are designed to solve, particularly faking and reference bias, are largely addressed by methods that rely on others' ratings.

Another popular technique for collecting ratings by others is the behaviorally anchored rating scale (BARS), a rating scale with behavioral anchors at various points along the scale to give additional meaning to the score points (or rating categories). The anchors are often derived from critical incidents (Flanagan, 1954), typically examples generated by subject-matter experts of the display of some trait or behavior, such as teamwork or leadership. Shultz and Zedeck (2011) illustrate the approach in developing BARS for 24 dimensions of Lawyering Effectiveness. For example, a BARS measure for the factor "Analysis and Reasoning" anchors a 0 (poor) to 5 (excellent) rating scale with descriptors such as the following (a subset of all the behavioral anchors for this scale); each number indicates the anchor's location on the scale:

  • 4.4: Extracts the essence of complex issues and doctrines quickly and accurately.
  • 3.7: Assesses whether a case is precisely on point, discerns whether an analogy holds, and conveys exactly why a case is distinguishable.
  • 1.9: Responds only to the immediate question before him or her; avoids broader framing of the issue and resists expanding the stated focus.
  • 1.3: Over-simplifies arguments, misses possible sub-issues and nuances, and fails to anticipate the opposing side’s points.
  • (Shultz & Zedeck, 2011, p. 638)
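Operationally, a BARS is just a mapping from behavioral descriptors to scale locations: the rater selects the anchor that best matches the target's observed behavior, and that anchor's location (or an interpolated value near it) becomes the numeric rating. A minimal sketch, using the "Analysis and Reasoning" anchors above with descriptions abbreviated:

```python
# Anchor location (on the 0-5 scale) -> abbreviated behavioral descriptor.
# Descriptors abridged from Shultz & Zedeck (2011); scoring logic is a
# simplified illustration, not their actual procedure.
ANALYSIS_AND_REASONING = {
    4.4: "Extracts the essence of complex issues quickly and accurately",
    3.7: "Assesses whether a case is precisely on point",
    1.9: "Responds only to the immediate question",
    1.3: "Over-simplifies arguments, misses sub-issues and nuances",
}

def rate(chosen_descriptor: str, bars: dict) -> float:
    """Return the scale location of the anchor the rater selected."""
    for location, descriptor in bars.items():
        if descriptor == chosen_descriptor:
            return location
    raise ValueError("descriptor is not an anchor on this scale")

score = rate("Assesses whether a case is precisely on point",
             ANALYSIS_AND_REASONING)
print(score)  # 3.7
```

In practice raters are usually shown the full scale and asked to mark a point, with the anchors serving as calibration rather than a forced menu; the data structure, however, is the same.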