Research Question 3: What Factors Affected Teachers' Use of Student Scores on the Summative ELP Assessments for Instructional Purposes?
Timing of the Score Reports
One of the biggest challenges teachers faced in using the summative ELP assessments to inform their instruction was the late timing of the score reports: Schools usually received the score reports at the end of the current school year or at the beginning of the following year. All of the teachers stated that they could not use the scores once their students had moved to another class or school. Although teachers took the scores on the summative ELP assessment into consideration when making placement decisions, they rarely employed them to guide their teaching and student learning because the scores were outdated and did not necessarily reflect the students’ current ELP levels. A K-1 ESL teacher in Illinois explained:
I do not use them [the scores] for almost [any] purpose. The scores I get don’t really tell me ... where the kids were ... what their actual needs are. If I am sitting down and we are doing writing, I can see what your actual needs are. [With the scores,] you are just handed a piece of paper saying, there are one, two [students] at this level and three, four [students] at that level, and that doesn’t tell me what I need to teach them next.
Three teachers commented that the summative ELP assessment was useful when they had the same students during the following academic year because they could track their progress and see where they had grown or where they still needed to grow. They could also use this information to determine the areas in which their students needed to improve and to decide whether they needed to spend additional time working in a specific domain. A K-5 pull-out ESL teacher in North Carolina commented:
It [the ELP assessment] does [have an impact] for me because I have the same kids. I do the entire elementary school. So, any student that doesn’t exit, I have them again the next year. So, I always get a spreadsheet that will show what their growth is, and I’ll look at what areas they’re struggling more with .... I will try to work extra on that so they can exit whenever the next test comes around.
However, the other 15 teachers felt the summative ELP scores were not very useful even when they continued with the same students because they thought that they knew their students better than what was reported on the assessment. A K-5 ESL teacher in New Jersey explained that the scores on the summative ELP assessment were not very useful because by the time he got the scores, he already knew his students’ current ELP development: “The information is outdated .... I just got my scores at the end of... last week [June]. It’s not useful to me because I already know these students. I know where they are now.”
Lack of Detail in the Score Reports
Another issue was the type of information provided on the score reports. Although the teachers had access to summary and individual student reports for their EL students, all of them thought this information was not detailed enough to guide their instruction. The teachers said that they could use the subscores to determine in which domains the students needed to improve, but that they did not receive information on students’ specific language skills in each of the domains. In order to use the summative ELP assessment to guide their instruction, 10 teachers said that they would like to receive item-level data (i.e., which items the students answered correctly or incorrectly) and the students’ actual responses on the speaking and writing subtests. An ESL teacher in New Jersey who taught Grades 7-10 described the type of information that she needed to guide her instruction:
So we can see ... what students are scoring ... on the different language skills .... We can adjust our instruction individually ... towards the students who are lacking those skills. But it doesn’t give us ... the idea of exactly what kind of a skill within reading comprehension or within writing, they were struggling with.
Score Reports Not Meaningful to Students
Although the teachers wanted to share the students’ ACCESS scores with their students to inform them about their own learning and language development, they did not think that the students would understand the information. Eight teachers complained that the reports were not written in student-friendly language that EL students could understand. Sixteen teachers mentioned that no instructions were provided on how to help EL students understand and interpret the score reports. Nonetheless, 12 teachers reported that they did meet with their students to discuss their performance on the summative ELP assessment and to provide information about where they needed to improve. However, they could not pinpoint the specific skills that their students needed to work on.
Another limitation was the amount of time that elapsed between test administration and teacher-student meetings to discuss test scores. Many of the students did not remember what was on the test. A seventh-grade ESL teacher in Illinois pointed out, “It’s like over a year ago for them; they have forgotten. They’re like, ‘What test did we take?’ So, again, it’s just not meaningful for them to go over those scores.”
Validity Concerns about the Speaking Scores
Eight teachers expressed concerns about the speaking scores on the online summative ELP assessment, which requires students to record their responses by speaking into a microphone. The teachers did not think that the student scores on this subtest were very accurate because they did not always reflect the students’ true speaking skills. They attributed this to several factors, including the online administration of the test, pointing out that it was difficult for many students to speak to a computer. The teachers also reported that the speaking tasks did not elicit long responses and, therefore, did not provide opportunities for students to demonstrate their speaking skills. An ESL teacher in New Jersey who taught Grades 3-8 explained:
There is a lot of complaining statewide about the scores being ... substantially inaccurate in the speaking [domain] .... The speaking [subtest] was by far the weakest section [for] almost all students, including students who ... anecdotally [we] wouldn’t have expected speaking to have been the problem. It just seems like that particular domain was scored lower than the other.
The teachers explained that it was better when they personally administered the speaking subtests because they could prompt their students to continue speaking or ask follow-up questions, which provided more meaningful information about the students’ speaking strengths and weaknesses. In addition, when the teachers administered the speaking subtests, they did not have to wait until the following year to learn what types of speaking difficulties their students had. They could incorporate this information immediately into their lesson planning. An ESL teacher in New Jersey explained why she preferred administering the speaking part of the assessment herself:
Before the ACCESS was online, the instructor, the teacher, myself, would do the speaking part of the test .... you can prompt them to continue speaking or you can ask additional questions. I just feel it was better when it was in person, not online .... The information was more meaningful and useful whereas ... right now, [it’s] just ... a computer ... asking them a question, and they’re answering in a microphone. Previously, I could say, ‘Is there anything else you’d like to add?’ Like I can prompt them to speak more, which should be ... a more accurate assessment of their speaking skills.