Section 1: Basic Concepts Required for Generating Constructed- and Selected-Response Items

Cognitive Model Development

Cognitive Models and Item Generation

Cognitive models have always been used to create test items—the only question is whether these models were specified implicitly or explicitly during the development process. Item development occurs when a subject-matter expert (SME) creates a task that can be solved using different types of knowledge, skills, and competencies within a particular content area (Keehner, Gorin, Feng, & Katz, 2017; Leighton & Gierl, 2007). The concepts, assumptions, and logic used by SMEs to create and solve this content-specific task are based on their mental representation or mental model (Gentner & Stevens, 1983, 2014; Johnson-Laird, 1983). In some cases, the SME's mental representation may also include assumptions about how examinees are expected to solve the task (Gierl, 1997). A cognitive model in educational testing is considered to be implicit when an item is created without documenting the process used or without capturing any type of formal representation of the concepts, assumptions, or logic used to create the item. In this case, the cognitive model that describes how a content expert and/or an examinee solves problems resides in the mind of the SME. Items developed with implicit cognitive models are not replicable.

Alternatively, the concepts, assumptions, and logic used by SMEs to describe how content-specific tasks are created and solved can be made explicit by documenting the process and by using a formal representation. A cognitive model in educational testing can be defined as a description of human problem solving on standardized educational tasks that helps characterize the knowledge and skills examinees at different levels of learning have acquired in order to facilitate the explanation and prediction of examinee test performance (Leighton & Gierl, 2007). This model organizes the cognitive- and content-specific information so that the SME has a formal, structured representation of how to create and solve tasks on tests. Items developed with a cognitive model are replicable because the information used to produce the item is clearly specified in a model whose content is explicit and detailed. Because the content is explicit and detailed, it can be evaluated to determine whether the item addresses a specific and intended outcome on the test.

The purpose of AIG is not to produce one unique item—as with traditional item development—but many diverse items. As a result, a cognitive model for AIG provides the important first step in an item generation workflow because it contains the specifications required to create large numbers of diverse items (Gierl, Lai, & Turner, 2012). These specifications can include the content, parameters, constraints, and/or instructions that will be used to control the behaviour of the model during item generation. In addition to task creation, the model can be used to describe test performance in cognitive terms by identifying the knowledge and skills required to elicit a correct response from the examinee, which, in turn, can be used to make inferences about how examinees are expected to solve tasks generated by the system. In other words, by specifying the content, parameters, constraints, and/or instructions used to generate items and by identifying these variables in cognitive terms using specific knowledge and skills, we can describe the knowledge and skills that examinees are expected to use when solving the generated items. This modelling approach can also be used to explain why examinees select specific incorrect options when selected-response items are generated. In short, a cognitive model for AIG is an explicit representation of the task requirements created by the SME, which is used to generate items and to describe how examinees are expected to solve the generated items. Cognitive models for AIG are not readily available because the task and cognitive requirements are specific, numerous, and often unique to each testing situation. As a result, these models must be created by the SME, often from scratch, for each test. Because of the important role these cognitive models play in the item generation and test score validation processes, they should also be thoroughly evaluated.
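To make the idea of a cognitive model as a generative specification concrete, the following is a minimal sketch in Python. The schema (a stem template, elements with permissible values, and SME-supplied constraints) and all of the content are invented for illustration; they are assumptions, not a prescribed AIG format from the literature.

```python
from itertools import product

# A minimal, hypothetical cognitive model expressed as structured data.
# The schema and content here are illustrative only, not a standard format.
cognitive_model = {
    # Stem template with placeholders for the manipulable content features.
    "stem": "A patient presents with {symptom} and a history of {history}. "
            "What is the most likely diagnosis?",
    # Elements: the content features and their permissible values.
    "elements": {
        "symptom": ["chest pain", "shortness of breath", "leg swelling"],
        "history": ["smoking", "recent surgery", "hypertension"],
    },
    # Constraints: combinations the SME judges implausible or off-target.
    "constraints": [
        lambda v: not (v["symptom"] == "leg swelling"
                       and v["history"] == "smoking"),
    ],
}

def generate_items(model):
    """Yield every constraint-satisfying assembly of the element values."""
    names = list(model["elements"])
    for combo in product(*model["elements"].values()):
        values = dict(zip(names, combo))
        if all(check(values) for check in model["constraints"]):
            yield model["stem"].format(**values)

for item in generate_items(cognitive_model):
    print(item)
```

Because the elements and constraints are recorded explicitly rather than held in the SME's head, the generated output is replicable and the model itself can be inspected and revised.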

Benefits of Using Cognitive Models for AIG

There are three benefits of using a cognitive model for AIG. First, the cognitive model identifies and organizes the parameters, constraints, and/or instructions required to control the item generation process. These variables are described using cognitive- and content-specific information within a formal, structured representation (Embretson & Gorin, 2001). The SME must identify the content and the conditions required to generate the items. This content is then used by the computer-based assembly algorithms described in step 3 of the AIG workflow to produce new items. Therefore, one practical purpose of the cognitive model is to specify the cognitive and content features that must be manipulated to produce new items. As the number of features in the cognitive model increases, the number of generated items will increase. As the source of the features varies, the types of generated items will vary. Hence, both quantitative and qualitative characteristics can be manipulated in the cognitive model to affect generative capacity.
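As a rough illustration of that generative capacity, the arithmetic below uses invented feature counts; the numbers are assumptions, not values from any particular model.

```python
from math import prod

# Hypothetical value counts for each content feature in a cognitive model.
feature_values = {"symptom": 3, "history": 3, "age_group": 4}

# Upper bound on generative capacity: the Cartesian product of the values.
upper_bound = prod(feature_values.values())   # 3 * 3 * 4 = 36

# SME constraints exclude implausible combinations, reducing the usable yield.
excluded_by_constraints = 6                   # assumed, for illustration
print(upper_bound - excluded_by_constraints)  # 30 usable items
```

Because capacity grows multiplicatively, adding one more feature with even a few values can increase the pool substantially, which is why feature selection in the cognitive model drives the quantitative side of generation.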

Second, the cognitive model can be used to make inferences about how examinees are expected to think about and solve items because it provides a structured representation of the content, parameters, constraints, and/or instructions used to create the task. By identifying the knowledge and skills required to generate new items, this cognitive description can also be used to account for how examinees are expected to select the correct and incorrect options from the generated items produced using specific types of knowledge and skills (see Leighton & Gierl, 2011, for a review). Hence, the cognitive model could be considered a construct representation that guides not only item generation but also test interpretation (Embretson, 1983, 1999, 2017). Test scores anchored to cognitive models should be more interpretable because performance can be described using a specific set of knowledge and skills in a well-defined content area: the model produces items that directly measure content-specific types of knowledge and skills.
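One way to see how a model can account for option selection is to tie each incorrect option to a named misconception. The sketch below does this for a hypothetical fraction-addition item; the misconception labels and the option logic are assumptions made for illustration, not a documented taxonomy.

```python
from fractions import Fraction

# Hypothetical generated item: "What is 3/4 + 1/2?" Each incorrect option is
# produced by a named misconception, so an examinee's choice can be traced
# back to the knowledge, skill, or error the model says it reflects.
a, b = Fraction(3, 4), Fraction(1, 2)

options = {
    "applies_correct_procedure": a + b,  # 5/4
    # Misconception: adds numerators and denominators separately (4/6 -> 2/3).
    "adds_parts_separately": Fraction(a.numerator + b.numerator,
                                      a.denominator + b.denominator),
    # Misconception: adds numerators but keeps the first denominator (4/4 -> 1).
    "keeps_first_denominator": Fraction(a.numerator + b.numerator,
                                        a.denominator),
}
for rationale, value in options.items():
    print(rationale, value)
```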

Third, the cognitive model is an explicit and formal representation of task-specific problem solving. Therefore, the model can be evaluated to ensure that the generated items yield information that addresses the intended purpose of the test. In traditional item development, the SME is responsible for identifying, organizing, and evaluating the content required to create test items. The traditional approach relies on human judgement acquired through extensive training and practical experience. Traditional item development is also a subjective practice because an item is an expression of the SME's understanding of knowledge and skill within a specific content area. This expression can be characterized as an implicit mental representation distinct to each SME and, therefore, unique to each handwritten item. Because the cognitive representation is implicit, distinct, and unique for each item, it is challenging to replicate and evaluate. Alternatively, the content, parameters, constraints, and/or instructions in a cognitive model for AIG are explicit and structured using cognitive- and content-specific information in a formal representation. Cognitive model development for AIG also relies on human judgement acquired through extensive training and practical experience. But because the model is explicit, it can be replicated by SMEs; it can also be scrutinized and, if need be, modified to address inadequacies. Once evaluated, the model serves as a generalized expression of how tasks can be generated, as well as how examinees are expected to solve these tasks. This expression can be used immediately for creating content or archived for future use.

 