How Do You Organize Large Numbers of Generated Items?

AIG is an exercise in logistics as much as item development. With the potential to produce thousands of new items, a systematic approach is needed to create and organize this valuable new content resource. AIG produces items that are anchored to the problem and scenario panel in the cognitive model (see the answer to the question, "How do you ensure that the generated items are diverse?"). As a result, to generate the content for an entire test, the SME must create cognitive models that address each outcome in the test specifications. These cognitive models guide the generation process so that the generated items satisfy the outcomes in the test specifications. A test is then created by sampling from the generated items for each outcome.

To generate items for each outcome in the test specifications, either an exploratory or a confirmatory approach can be used. By exploratory we mean that parent items are identified first, item models are created next, generation is conducted, and, finally, the generated items are content coded. The logic of the exploratory approach is that AIG is used to generate large numbers of items, which are then content coded so that they can be aligned with the outcomes in the test specifications. Exploratory AIG is analogous to exploratory factor analysis, in which the technical analysis is conducted first (i.e., the items are statistically associated with factors) and the content analysis is conducted second (i.e., the substantive meaning of the factors is determined by the content specialists). A confirmatory approach can also be used (see Gierl, Lai, Hogan, & Matovinovic, 2015). By confirmatory we mean that the content in the test specifications for the parent items is specified first, item models that measure these content specifications are created second, and items are generated from these models last.
The outcome of this process is that the generated items are "pre-aligned" to the test specifications because of the careful attention devoted to content alignment during the creation of the item models. Confirmatory AIG is analogous to confirmatory factor analysis, in which the content analysis is conducted first (i.e., the substantive meaning of the factors is determined) and the technical analysis is conducted second (i.e., the items are statistically fit to the content-defined factors). Gierl et al. (2015) demonstrated how confirmatory AIG could be used to generate math items that were carefully aligned to the Common Core State Standards in Mathematics. The benefit of a confirmatory approach is that the generated items are aligned with the content in the test specifications; no additional content coding or alignment is required after generation, unlike in the exploratory approach.
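The difference between the two approaches is purely one of ordering: generation is the same, but content coding happens after generation (exploratory) or is inherited from the specification-driven model (confirmatory). The following minimal sketch illustrates this contrast; the model format, element names, and the outcome code "NUM.ADD.1" are invented for illustration and are not part of any published AIG system.

```python
from itertools import product

def generate(model):
    """Expand an item model: substitute every combination of element
    values into the stem template (the step common to both workflows)."""
    names = list(model["elements"])
    return [model["stem"].format(**dict(zip(names, values)))
            for values in product(*model["elements"].values())]

# Exploratory: generate first, then content-code the items afterward.
model = {"stem": "What is {a} + {b}?",
         "elements": {"a": [2, 3], "b": [4, 5]}}
items = generate(model)
coded = [{"item": i, "outcome": "NUM.ADD.1"} for i in items]  # coding last

# Confirmatory: the model is built from a test-specification outcome,
# so every generated item inherits its alignment from the model.
spec_model = dict(model, outcome="NUM.ADD.1")
pre_aligned = [{"item": i, "outcome": spec_model["outcome"]}
               for i in generate(spec_model)]
```

Either way, the same four items are produced; only the point at which the outcome code is attached differs.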

Once the items are generated and content coded using either the exploratory or confirmatory approach, they must be banked. As we noted in Chapter 1, current practices are grounded in the unit of analysis of the test item. Each item is individually written, reviewed, revised, and edited. An item bank, therefore, serves as an electronic repository for maintaining and managing information on each item. The maintenance task focuses on item-level information. Every characteristic of the item must be coded, such as the format (e.g., multiple-choice, numeric response, written response, linked items, passage-based items), content (e.g., test specification categories, item identification number, field test number, date, source of item, item sets, copyright), SME attributes (e.g., year the item was written, SME name, SME demographics, editor information, development status, review status), and quantitative characteristics (e.g., word count, readability, item history). The management task focuses on person-level information. That is, item bank management requires explicit processes that guide the use of the item bank. Many different people within a testing organization are involved in the development process, including the SMEs, psychometricians, editors, graphic artists, word processors, and document production specialists. Many testing programs field-test their items, and review committees then evaluate the items prior to final test production. Hence, field-tested items are often the item bank entry point. Rules must be established for who has access to the bank and when items can be added, modified, or removed. The same rules must also apply to the preparation of the final form of the test because field testing can occur in a different unit of a testing organization or at a different stage in the development process and, therefore, may involve different people.
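The four groups of item-level codes described above can be thought of as one record per item. The sketch below shows one possible shape for such a record; the field names and example values are illustrative only, not a standard item-banking schema.

```python
from dataclasses import dataclass

@dataclass
class ItemRecord:
    # Format codes
    item_format: str                  # e.g., "multiple-choice", "numeric response"
    # Content codes
    item_id: str
    spec_category: str                # test-specification category
    field_test_number: str = ""
    source: str = ""
    # SME attributes
    author: str = ""
    year_written: int = 0
    development_status: str = "draft"  # e.g., draft / reviewed / field-tested
    # Quantitative characteristics
    word_count: int = 0
    readability: float = 0.0

# One record must be maintained for every individual item in the bank.
item = ItemRecord(item_format="multiple-choice", item_id="MC-00412",
                  spec_category="NUM.ADD.1", author="SME-07",
                  year_written=2020)
```

With thousands of generated items, every one of these records must be created and maintained separately, which is precisely the bookkeeping burden the model bank described next is meant to reduce.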

Models serve as the unit of analysis in AIG. A single cognitive model and its associated item model are written, reviewed, revised, and edited, and this pair of models can then be used to generate large numbers of items. A model bank serves as an electronic repository for maintaining and managing information on each model. Because the model is the unit of analysis, the bank contains an assortment of information on every model but not on every item. For this reason, the banking process is simplified. The maintenance task focuses on model-level information: the format of the cognitive and item models, the content fields, the SME attributes, and the quantitative characteristics of each model must all be coded.

The model bank could also contain coded information on the cognitive and item model ID, the model name, the expected grade levels for use, the model stem type, the model option type, the number of constraints in the model, the number of elements in the model, and the number of generated items. The management task focuses on person-level information. Model bank management requires explicit processes that guide the use of the bank. As with the traditional approach to item development, many different people within a testing organization are involved, including the SMEs, psychometricians, editors, graphic artists, and word processors. An additional specialist could also be involved—the AIG model bank developer. This specialist has a basic knowledge of the test development process but is also highly skilled in computer programming and database management. The AIG model bank developer's role is to bridge the gap between the SME who creates the models and the programming tasks necessary to generate items and maintain information in the model bank. The AIG model bank developer would be responsible for helping the SME to implement the three-step AIG method, validating the models, entering the cognitive and item models into generation software, executing the software for item generation, sampling the generated items as they are required for test development, and maintaining the contents of the bank. These responsibilities would apply to each content area and grade level where AIG is used.
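To make the economy of model-level banking concrete, a model record can be sketched as follows. One record stands in for every item the model generates; the field names mirror the codes listed above, but the schema and example values are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    model_id: str
    model_name: str
    grade_levels: tuple      # expected grade levels for use
    stem_type: str
    option_type: str
    n_constraints: int       # number of constraints in the model
    n_elements: int          # number of elements in the model
    n_generated_items: int   # number of items the model produced

# A single model-level record replaces 1,248 item-level records.
bank = [ModelRecord("CM-017", "two-digit addition", (3, 4),
                    "numeric", "multiple-choice", 2, 3, 1248)]
total_items = sum(m.n_generated_items for m in bank)
```

Queries such as "how many items does the bank cover?" are answered by summing the generation counts over the model records, rather than by counting thousands of individually maintained items.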

 