Distractor Generation: The Importance of the Selected-Response Item in Educational Testing

An item model identifies the parts of an assessment task that can be manipulated for item generation. These parts include the stem, the options, and the auxiliary information. The stem contains the context, content, and/or the question the examinee is required to answer. The options include a set of alternative answers with one correct option and one or more incorrect options, or distractors. Only the stem and the correct option are required for constructed-response item models. The three-step method presented in Chapters 2, 3, and 4 can be used to generate these types of items. Constructed-response items are used to measure important cognitive skills, such as critical thinking and problem solving, that are often considered challenging to evaluate using the selected-response item format because examinees are expected to construct rather than select their response. The process of creating a response by writing the answer is important for assessing thinking and reasoning skills. Writing also serves as one of the essential 21st-century skills required to effectively communicate, reason, and collaborate (Ananiadou & Claro, 2009; Binkley, Erstad, Herman, Raizen, Ripley, Miller-Ricci, & Rumble, 2012; Chu, Reynolds, Notari, & Lee, 2017; Darling-Hammond, 2014). However, constructed-response items that require examinees to produce a written response are also costly to administer and challenging to score because they rely on human raters. When examinees construct their responses, those responses must be scored by subject-matter experts (SMEs). The SMEs who score constructed-response items must first be recruited, then extensively trained, and their performance must be monitored to ensure that scores are produced consistently. A well-established literature exists on constructed-response scoring using human raters (e.g., Bejar, Williamson, & Mislevy, 2006). Alternative methods are also beginning to emerge in which computers supplement human raters to improve score consistency and reduce the time required to score examinees' responses (Shermis & Burstein, 2013; Shermis, Burstein, Brew, Higgins, & Zechner, 2016; Williamson, Mislevy, & Bejar, 2006).
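
To make the parts of an item model concrete, the following is a minimal sketch of how they might be represented in Python. The class and field names (ItemModel, stem, correct_option, distractors, auxiliary_information) are illustrative assumptions for this sketch, not a published specification:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ItemModel:
    """Parts of an assessment task that can be manipulated for item generation.

    Illustrative only: field names are assumptions, not a standard schema.
    """
    stem: str                  # context, content, and/or the question to answer
    correct_option: str        # the single correct answer
    distractors: List[str] = field(default_factory=list)  # incorrect options
    auxiliary_information: str = ""  # e.g., a figure, table, or reading passage

    def is_selected_response(self) -> bool:
        # A selected-response model requires distractors; a constructed-response
        # model requires only the stem and the correct option.
        return len(self.distractors) > 0
```

Under this representation, the distinction drawn above falls out directly: a constructed-response model leaves the distractor list empty, while a selected-response model (discussed below) must populate it.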

The selected-response item format is often used to bypass the need for human scoring, thereby increasing the efficiency of the testing process and improving score consistency. A selected-response item permits the direct measurement of a broad range of knowledge, skills, and competencies at different levels of the education system, for a wide range of purposes, and in many different subject areas. As a result, it remains the most widely used item format in educational testing (Haladyna & Rodriguez, 2013; Rodriguez, 2016). Students write hundreds of tests and answer thousands of selected-response items as part of their K-12 educational experience. In higher education, the selected-response format remains one of the most widely used item formats for measuring students' knowledge, especially in introductory and large-enrollment courses. Selected-response item formats are also used extensively in international assessments. For example, in the 2018 administration of the Programme for International Student Assessment (PISA), two-thirds of the items were administered in the selected-response format. Selected-response items are efficient to administer, easy to score, and can be used to sample a wide range of content domains in a short time. Dr. Stephen Downing (2006), a recognized expert in item and test development, asserted that selected-response items are the most appropriate format for measuring cognitive achievement or ability, especially higher-order cognitive skills such as problem solving, synthesis, and evaluation. He also claimed that this item format is both useful and appropriate for creating exams intended to measure a broad range of knowledge, ability, or cognitive skills across many domains.

To generate selected-response items using automatic item generation (AIG), the stem, the correct option, and the incorrect options are all required. When answering a selected-response item, the examinee is presented with a stem and two or more options that differ in their relative correctness. Examinees are required to distinguish among the options, several of which may be partially correct, in order to select the best or most correct one. Hence, examinees must use their knowledge and problem-solving skills to identify the relationship between the content in the stem and the correct option. The incorrect options are called distractors because they can distract examinees from the correct option: their features seem plausible to an examinee with only partial knowledge. A sketch of how these three parts come together during generation follows.
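
The following is a minimal, hypothetical sketch of assembling one selected-response item from a parent template. The stem template, the content values, and the generate_item function are invented for illustration; in practice, the manipulated elements and the distractors (here, conditions that share the symptom named in the stem, and so appear plausible under partial knowledge) would be derived from an item model and its underlying content:

```python
import random

# Hypothetical parent template; {symptom} is the manipulated element of the stem.
STEM_TEMPLATE = "A patient presents with {symptom}. What is the most likely diagnosis?"

# One illustrative content row: the correct option plus distractors chosen
# because they are plausible to an examinee with only partial knowledge.
CONTENT = [
    {
        "symptom": "crushing chest pain radiating to the left arm",
        "correct": "myocardial infarction",
        "distractors": ["stable angina", "gastroesophageal reflux", "costochondritis"],
    },
]


def generate_item(row):
    """Assemble one selected-response item from a content row."""
    options = [row["correct"], *row["distractors"]]
    random.shuffle(options)  # vary the position of the keyed (correct) option
    return {
        "stem": STEM_TEMPLATE.format(symptom=row["symptom"]),
        "options": options,
        "key": options.index(row["correct"]),  # index of the correct option
    }


print(generate_item(CONTENT[0]))
```

Each additional content row yields another item from the same parent, which is what makes the distractor lists, and not just the stems, a central design problem in AIG.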
