ACT Research on Comparability

The ACT has been offered as a linear computer-based test to a limited number of states and districts conducting school-day testing. In spring 2017, approximately 81,000 students tested on computer, including 43% who used a Chromebook (Z. Cui, personal communication, May 8, 2017). Research on mode comparability has focused primarily on differences in screen size or in the amount of content displayed within the same type of device (e.g., laptop, desktop). In an early study comparing monitor size differences, Bridgeman, Lennon, and Jackenthal (2003) found that verbal scores were 0.25 standard deviations lower when the amount of reading content displayed on screen was reduced. When less content is displayed on the screen, more scrolling is required; this may increase the demand on short-term memory and cognitive load (Sanchez & Goolsbee, 2010) and require additional time for similar performance. This issue may be most prominent when dual passages or multiple graphics (e.g., tables, figures) are present or when items and stimuli are not displayed on the same screen. Such differences in displays have been cited as a source of construct-irrelevant variance, which could provide an advantage for paper over computer administration and for larger displays over smaller displays (Bridgeman, Lennon, & Jackenthal, 2001; Chaparro, Shaikh, & Baker, 2005).

Comparability across Digital Devices

Devices in Schools

Numerous digital devices are employed in large-scale educational assessment. In grades K-8, tablets have become the preferred device; laptops are still preferred by high school educators (Pearson, 2015; Deloitte, 2016). Table 8.1 illustrates students' device preference and usage in schools across grades as reported in two different studies.

Table 8.1 Device preference by school grade

Pearson (2015): "Which of the following devices do you regularly use at school?"
Deloitte (2016): "If you had to pick only one device at school, which device would it be?"

Grade | Pearson: Tablet (%) | Pearson: Laptop/Chromebook (%) | Pearson: Smartphone (%) | Pearson: Hybrid "2 in 1" (%) | Deloitte: Tablet (%) | Deloitte: Laptop/Chromebook (%)
K-2   | 78 | 66 | 53 | 10 | 53 | 15
3-5   | —  | —  | —  | —  | 36 | 26
6-8   | 69 | 71 | 66 |  8 | 30 | 29
9-12  | 49 | 76 | 82 |  9 | 25 | 37

Note: — = value not reported for that grade band.
In the first study, tablet use was reported to be highest in elementary schools, with 78% of elementary students versus 49% of high school students indicating that they regularly used a tablet; laptop and Chromebook usage was reported to be highest in high school (Pearson, 2015). A second survey of student preferences shows a similar pattern, with a stronger preference for tablets in the earlier grades and a moderate preference for laptops and Chromebooks in high school (Deloitte, 2016). Tablets and laptops each come in a variety of screen sizes and with different operating features, and Chromebooks have grown increasingly popular while mobile devices are entering instructional assessment (Deloitte, 2016). Mobile devices were identified as the number one workplace trend in the Society for Industrial and Organizational Psychology's top-ten trends for 2015 (SIOP, 2015). In preemployment testing, mobile devices are nearly synonymous with unproctored internet-based tests; their increased popularity is traced to the desire to assess talent anytime and anyplace as well as to growth in mobile device ownership (Arthur, Keiser, & Doverspike, 2018).

Bring Your Own Device (BYOD) paradigms have been cited as the next big trend in education, but unlike preemployment testing, educational assessment has prescribed minimum requirements (e.g., screen size, operating systems, security features). A typical BYOD implementation may require students to register devices with a school to gain access to software and content. BYOD also seems ideally suited for learning assessments, which require more frequent interactions; however, it is rarely used in situations that necessitate norm-referenced interpretations, as is the case with many summative assessments. If BYOD efforts gain even more popularity and acceptance in education, there will be increased pressure to relax existing requirements on technology, which generally prohibit small screens or mobile devices. As is evident from this discussion, technological advances will increasingly challenge concepts and assumptions of standardization in assessment. However, such conflicts cannot be easily dismissed when assessments seek to support claims of score comparability. Greater flexibility in digital delivery, input, and interactions creates differences in user experiences and performance, which could increase disparities among students (Sager, 2011).

 