Step Two: Data Collection
Collecting data as part of a teacher-as-researcher process means: (1) using data that were already collected for another purpose and that relate to the area of interest, and/or (2) collecting new data that directly relate to the area of interest. In the earlier scenario, the teacher may collect data through multiple approaches: documenting the frequency of student-initiated responses, observing students engaged in peer-group tasks, and interviewing students one-on-one to listen to their ideas. For another example, consider a teacher who has asked the parents of her students to check their child’s school account once a week for updates on how their child is progressing, what their child is or has been working on, and what will be coming up in the near term. If the teacher notices that a student has not been turning in homework, the teacher can verify that the parent has logged in (i.e., data already collected for another purpose). If the verification indicates that the parent has routinely checked in, then it is reasonable to conclude that the parent knows about the homework. The teacher may then consider what other types of data are needed to address the issue of the student not submitting homework.
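As a minimal sketch of this verification step, the snippet below checks which weeks a parent logged into the school account. The login dates, reporting window, and function name are all hypothetical illustrations; a real school system would have its own export format.

```python
from datetime import date

# Hypothetical login records pulled from the school account system
# (data already collected for another purpose).
parent_logins = [date(2024, 3, 4), date(2024, 3, 11), date(2024, 3, 25)]

def weeks_with_login(logins, start, end):
    """Return the set of week indices (0-based from `start`) with at least one login."""
    return {(d - start).days // 7 for d in logins if start <= d <= end}

start, end = date(2024, 3, 1), date(2024, 3, 28)
total_weeks = ((end - start).days // 7) + 1
checked = weeks_with_login(parent_logins, start, end)
missed = [w for w in range(total_weeks) if w not in checked]
print(f"Parent checked in {len(checked)} of {total_weeks} weeks; missed weeks: {missed}")
# → Parent checked in 3 of 4 weeks; missed weeks: [2]
```

A summary like this lets the teacher see at a glance whether the weekly check-in routine is actually happening before deciding what additional data to gather.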
A further example of the process of data collection for addressing issues or concerns is using the data from previously administered summative assessments. For instance, as described in Chapter 4, the differentiated performance assessment administered at the end of a unit of study can be used for multiple purposes: (1) to document student learning, (2) to identify areas for future growth for each student, (3) to reflect on the instruction and the learning activities for areas of improvement (i.e., teacher reflection), and/or (4) for curriculum revision.
For the area of using data for curriculum revision purposes, a separate set of questions should be generated to inform decisions about adjustments that might need to be made. The first step in this process is identifying the specific questions that are answerable by the data obtained from the summative assessment. Once the questions have been generated, the process pivots to ask what changes can be made to the curriculum to address the concerns, issues, or patterns revealed in the summative assessment analysis, and how high a priority each potential change might be. As this process unfolds, additional data may need to be gathered before finalizing any curriculum revisions. For example, data from actual classroom resources (e.g., activities) may be required to determine the degree of alignment between the curriculum, the resources, and the assessment. In this instance, data are not just student performance results but also include the actual materials used during instruction.
Step Three: Data Analysis
The process of data analysis is where the data use process becomes iterative. Once the data analysis stage begins, we may realize that: (1) additional data need to be collected or (2) more questions need to be asked, requiring additional data collection. In any case, the questions should uncover the needs, priorities, or resources required to address the area of interest or concern. Common themes that might drive the data analysis phase for differentiated assessment include the following, among others.
Identifying Student Performance Differences
How do student outcomes differ by groups within a class and across classes? Are there specific areas that need improvement for some students, or for all students? For example, when looking over the differentiated performance assessment noted in Chapter 4, an 8th grade social studies teacher asked the following questions: Which of my students scored at the “World Class Traveler” level, the “Travel Agent” level, and the “Traveler-in-Training” level? Of the students scoring at each level, are there performance patterns at each score level? For each class, a teacher might look at the highest and lowest scores to determine if there are patterns of differences in students’ performances: Are the highest scores associated with certain groups of students or with certain activities or resources with which the students have engaged? Are the lowest scores associated with certain groups of students or certain individual students? Were there common misunderstandings in the group that perhaps reflect a breakdown in the instruction at a particular time?
Identifying Areas for Improvement
Using the data for improvement can serve two differentiation purposes: (1) improvement for future instruction within a unit or (2) improvement in curricular documents.
Future Instructional Improvement. What aspects of student performance point to opportunities for instructional improvement? Were certain content areas or strategies under-emphasized or over-emphasized in instruction? Were particular materials more or less associated with student success or failure? Are there specific areas that could be extended for certain students, or for all students?
Curricular Improvement. Are there areas in student performance that suggest curriculum revisions are needed or that there are gaps in the content in the curriculum? Are there different strategies, approaches, or resources that should be included in the curriculum?
Once the essential questions are identified and the data analyzed to address them, the next step is to make sense of the results of the analysis—the meaning-making stage. This is also the stage where additional questions may be generated to help better understand the area of interest. When this is the case, the processes of data collection and data analysis continue (i.e., the process becomes iterative).
Step Four: Meaning-Making: Information Generating
The purpose of the meaning-making process is to generate information—to combine the results of the analyses with an understanding of the situation (e.g., classroom climate, knowledge about students)—for the purpose of taking actions to address the identified area of interest. In the Chapter 4 example of the differentiated performance assessment, while the data analysis stage gave the teacher a picture of student progress, it did not tell her what, if anything, to do to modify instruction or the curriculum moving forward. As an example, after administering the differentiated performance assessment, the teacher wanted to examine which students consistently scored above and below the targeted performance level. The teacher graphed the data into three groups: (1) those who scored above the target (“World Class Planner”), (2) those who scored below the target (“Traveler-in-Training”), and (3) those who scored near or around the target (“Travel Agent”). In reviewing the results and taking into consideration the curriculum, the resources that were used during instruction, and the students’ needs, the teacher decided that students who scored in the “World Class Planner” range needed additional levels of challenge during the learning cycle. As a result, she examined aspects of her instruction and resources that she could modify to provide extended challenge for advanced learning in future classes. For students in the “Travel Agent” range, because this range indicates success with the targeted goals, she will make no changes; for students below the targeted range, she will use a different instructional approach and some different resources, with continued data collection to monitor the students’ movement toward the identified learning targets.
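The teacher’s three-group breakdown can be pictured as a simple banding rule applied to rubric scores. In the sketch below, the student names, numeric scores, and the single numeric target level are hypothetical; only the group labels come from the chapter’s example.

```python
# Hypothetical rubric scores; the level names follow the chapter's
# differentiated performance assessment example.
scores = {"Ana": 4, "Ben": 2, "Cai": 3, "Dev": 4, "Eli": 1, "Fay": 3}
TARGET = 3  # assumed numeric score for the targeted "Travel Agent" level

def band(score, target=TARGET):
    """Place one score into the above/at/below-target group."""
    if score > target:
        return "World Class Planner"   # above target: extend the challenge
    if score < target:
        return "Traveler-in-Training"  # below target: new approach, keep monitoring
    return "Travel Agent"              # at target: no change needed

# Group the class roster by band.
groups = {}
for name, score in scores.items():
    groups.setdefault(band(score), []).append(name)

for level, names in groups.items():
    print(level, names)
```

Tallying the names in each group gives the teacher the same picture as the graph: who needs extended challenge, who is on target, and who needs a different approach with continued monitoring.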
Step Five: Action Planning
With information in hand focused on the targeted areas of interest, the next step in the data use process is to plan the action that is needed to address the issue or concern, or in other words, what is needed to implement solutions resulting from the sense-making, information-generating stage.
Let’s consider the earlier example using the performance assessment from Ms. Caledron’s class. Ms. Caledron was making sense of the data from the students’ responses on the performance assessment for different purposes in order to take specific actions. First, she summarized each student’s level of proficiency related to the targeted learning goals—she used the rubric to assign student grades based on their performance on the assessment. Second, she considered the results of the assessment from a formative stance, analyzing students’ achievement on the performance assessment. What she learned from the students’ levels of proficiency had implications for the upcoming unit and provided her with insights about how she would differentiate for the students’ needs. Third, she used the results from the students’ performance assessment responses to analyze the alignment between the curriculum, assessment, and instruction to identify areas in which there were gaps or areas that needed to be enhanced in a future iteration of the curriculum. The area of interest identified for investigation determines the types of responses required. Whether for instructional improvement, addressing specific students’ needs through differentiated instruction, or curriculum revision, each purpose requires different types of actions.
Step Six: Reflecting on Actions
Reflection should be ongoing throughout each of the steps so that revisions can be made during implementation if necessary, but the final phase of the Teacher as Researcher process is for you to systematically reflect on the degree to which the action plan addressed the identified interest. This stage is where you review what happened, determine if the actions were effective, and make decisions about future steps to be taken.