Write Items to Measure Claims and Targeted Performance Standards

The ECD approach is consistent with Samuel Messick’s directive: “The nature of the construct being assessed should guide the selection or construction of relevant tasks, as well as the rational development of construct-based scoring criteria and rubrics” (1994, p. 20).

One of the benefits of the ECD framework is that it supports multiple item types. It has been used by the College Board in the development of Advanced Placement (AP) examinations, which include multiple-choice, short constructed-response and extended constructed-response items. Recently, researchers at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) used this approach in the development of a simulation-based assessment (Mislevy, 2011). A key step is to match the type of task to the specifications articulated in the claims and performance standards.

As others have noted (e.g., Steinberg et al., 2003), evidence of the skill integrated with the content is a critical piece missing from traditional learning goals. Evidence that describes this integration gives teachers guidance for designing instruction that develops the content and the skill in the context of each other. The ECD artifacts provide a level of detail that supplies teachers with clear targets for both instruction and assessment, because the activities that produce these artifacts emphasize defining the targeted understanding in a way that captures the use of the knowledge, not just discrete concepts or facts. Clear learning goals in the form of claims and evidence can benefit students as well, because they make explicit what is expected. This, in turn, can foster metacognition, if instruction permits, because students know their goals and can reflect on their own understanding of them (Bransford et al., 1999).

Figure 7.7 Sample item specification table.

“Task models” (also known as item specifications) describe how to structure the assessment situations needed to obtain the evidence specified in the claims and the domain analysis. They describe the task material presented to the examinee and the work products expected in response, and they include features of tasks such as the general content, appropriate stimuli, the expected interaction between the student and the task, the expected type of response, and possible misunderstandings by the student (Mislevy, Almond & Lukas, 2004). These features help item writers and form developers ensure that the proper mix of tasks is included in the assessment. In the Smarter Balanced consortium, task models were designed to elicit evidence for a given claim or set of claims about an assessment target. Figure 7.7 provides a snapshot of a partial annotated item specification table for Smarter Balanced.

 