DETERMINING CONTENT AND COGNITIVE DEMAND FOR ACHIEVEMENT TESTS
Marianne Perie and Kristen Huff
Toward the end of the 20th century, as state and federal accountability systems required reporting the percentage of students meeting a specified target on an examination, K–12 assessments necessarily moved to criterion-referenced specifications. This shift meant redesigning assessments to discriminate reliably and validly among levels of performance rather than among students in a normative fashion. Changing the purpose and use of the assessments has slowly led to a conceptual adjustment in developing test specifications, focusing on the idea that the proficiency target and claims regarding the degree of student knowledge and skill should be the primary driver of all test design and task specification decisions. Thus, assessment design has moved from ensuring broad coverage of discrete content areas (e.g., numbers and operations, functions, measurement) and skill areas (e.g., identify, describe, analyze) to understanding exactly what educators or policy makers want to say about student knowledge and skills and then developing a set of items that elicit evidence to support such assertions. Test design is more clearly linked to the development of key knowledge and skills when we move away from the broader context of domain sampling to determine domain mastery and move closer to the more specified context of asking, "Where along this performance trajectory does this student most likely belong at this moment in time?" The performance trajectory is articulated via the performance level descriptions (PLDs), which are in turn informed by educator understanding of how students progress, research-based learning progressions, and cognitive models of learning.
This chapter explores issues related to developing educational assessments using evidence-centered design, with a focus on categorizing student performance into one of four to five performance levels. Thus, this chapter emphasizes using PLDs and assessment claims to drive the content and cognitive demand of achievement tests. Riconscente and Mislevy (this volume) provide an introduction to evidence-centered design (ECD); the focus in this chapter is how to use such an approach to determine the content and cognitive demand for achievement tests. Although there is no prescribed approach or recipe for using ECD, ECD is a set of principles and tools that facilitate coherent assessment design and development. In this chapter, we describe a generalized ECD approach that draws heavily on examples from the College Board's Advanced Placement (AP) exams, the Race to the Top Assessment Consortia, and the Principled Assessment Design Inquiry (PADI) projects. Starting with an analysis of the domain, which includes consideration of how students learn and develop knowledge and skills in the domain as well as learning progressions or maps, this chapter describes the process of parsing out the full set of knowledge and skills, prioritizing what is to be assessed (i.e., the targets of measurement), developing PLDs, and drafting assessment claims. Each of these pieces facilitates the process of outlining and then fleshing out test specifications and item-writing protocols that guide test developers in designing construct-relevant items that elicit evidence for placing students along a trajectory of performance. Finally, this chapter concludes with a discussion of the benefits and challenges of this approach, which starts from the idealized final product to determine the entry points of the assessment design.