Using Assessment to Make Instructional Decisions

At the heart of Response to Intervention is early identification of students who are at risk of academic failure. The problem-solving model provides an efficient and effective framework to assess students' academic functioning and to use the assessment data to inform and evaluate instructional practices and interventions (Deno & Mirkin, 1977). The five basic steps in the data-based problem-solving model are the following: 1) problem identification, 2) problem analysis, 3) intervention planning, 4) plan implementation, and 5) progress monitoring and plan evaluation. Figure 2.1 gives a visual representation of these important steps, developed by the Rhode Island Department of Elementary and Secondary Education (2010).

The problem-solving model serves many important functions in a school. First and foremost, it provides an organizational structure that guides teams in their efforts to maximize student success. The five steps mentioned above help teams evaluate school-wide data, prioritize goals, and formulate plans to help all students. The purpose of this chapter is to provide a general overview of the problem-solving process as it relates to instructional decisions in mathematics. There are numerous high-quality books that go into much greater detail about the technical aspects of educational assessment. See the e-resource for a list of resources containing in-depth information about assessing students' understanding of mathematical skills and concepts.

One of the biggest shifts in current educational practice is the move toward using data to inform instructional decisions in the classroom setting. In the past, providing struggling students with additional support was heavily dependent on teacher recommendations. Over the past five years, greater emphasis has been placed on using objective academic data to guide instruction and interventions in the classroom. Schools are now using universal screening, benchmarking, and progress monitoring to assess student outcomes, as well as the effectiveness of classroom curriculum and instruction. While schools are collecting more data, there is still a gap between collecting data and using data to inform educational decisions. By following the steps of the problem-solving model, educators can ensure that they are identifying and addressing student academic needs in the most targeted and effective way. In this chapter, we will discuss the steps of the problem-solving model as a framework for school-based teams and individual educators to use data to guide their instruction and supplemental interventions.

Figure 2.1 Problem-Solving Model

Source: Rhode Island Department of Elementary and Secondary Education. (2010). Rhode Island Criteria and Guidance for the Identification of Specific Learning Disabilities. Providence: Rhode Island Department of Elementary and Secondary Education. Used with permission.

Step 1: Problem Identification

What is the difference between what is expected and what is happening?

To answer this question, we must use the first step in the problem-solving process to identify both those students who are on track and those students who need additional support to be successful. This requires schools to identify local criteria for what is considered adequate performance and the cut-off for what is considered "at risk."

Universal screening provides a comprehensive "sweep" of all children in the school to identify students who need additional support in foundational skills. This sweep enables schools to analyze the effectiveness of the core curriculum and identify which students need additional support.

Screening requires an assessment that is generally inexpensive and easily administered and scored, and that provides reliable data on critical skills (e.g., number sense, quantity discrimination). The skills being assessed should have high predictive validity, meaning that students' performance on the subskills provides meaningful data regarding future success in that domain. For example, a student who struggles with quantity discrimination and identifying missing numbers is at risk for future challenges in mathematics. Typically, schools conduct school-wide, or universal, screening two or three times per year. For students who are performing adequately in their classes and on these screening measures, this frequency is sufficient. Students who are struggling, or who score in the at-risk range on the universal screening, need to be monitored more frequently. This topic is discussed in more detail later in the chapter under Plan Evaluation.
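The mechanics of applying a local cut-off to screening scores can be sketched in a few lines of code. This is a hypothetical illustration only: the student names, scores, and the 25th-percentile cut-off are invented for the example, and real screening tools publish their own norms and cut scores.

```python
# Flag students whose screening score falls at or below a locally
# chosen percentile cut-off (here the 25th percentile, a common but
# purely local and illustrative choice).

def flag_at_risk(scores, percentile=25):
    """Return the names of students at or below the given percentile."""
    ranked = sorted(scores.values())
    # Index of the cut-off score within the ranked list.
    cut_index = max(0, int(len(ranked) * percentile / 100) - 1)
    cut_score = ranked[cut_index]
    return sorted(name for name, s in scores.items() if s <= cut_score)

# Hypothetical fall screening scores for one classroom.
fall_screening = {
    "Ava": 34, "Ben": 18, "Cara": 41, "Dan": 22,
    "Eli": 39, "Fay": 27, "Gus": 45, "Hana": 31,
}

print(flag_at_risk(fall_screening))  # → ['Ben', 'Dan']
```

In practice the cut-off would come from the school's locally adopted criteria, and, as the chapter emphasizes, no single score should determine a student's need for support on its own.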

Core Program Evaluation

One of the main purposes of universal screening is to evaluate the effectiveness of the core curriculum. When schools collect data on all students, rather than analyzing student data in isolation, it is easier to identify trends in student performance across grade levels. Assuming the core curriculum is being implemented with fidelity (meaning all teachers deliver the instruction and curriculum the way they were designed), we can assess how well the curriculum teaches the requisite skills across grade levels and classes. For example, in analyzing the universal screening data for quantity discrimination at the second-grade level at School X, we can see if the curriculum effectively addresses this concept. If we find that a high percentage of students in multiple second-grade classrooms score poorly on the universal screening assessment for quantity discrimination, we could logically deduce that additional time and instruction need to be added to the core curriculum in this specific area. If a small percentage of students score poorly, we can conclude that the core curriculum is adequately covering the concept of quantity discrimination for the majority of the students. It should be noted that the appropriateness or adequacy of the core curriculum also depends on the students receiving instruction in that curriculum. Since students' background knowledge and mastery of skills will vary from year to year, it is possible that the core math curriculum adequately meets the academic needs of the students in some years, but that in other years supplemental instruction or materials may need to be added to the core curriculum. By using universal screening data to assess the effectiveness of the core curriculum, school leaders can ensure that all the students are receiving quality and effective instruction in Tier 1.
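The grade-level analysis described above amounts to simple arithmetic: for each classroom, compute the share of students scoring below benchmark and compare it to a locally chosen threshold. The sketch below uses invented scores, an invented benchmark of 30, and an invented 20% flag rate purely to illustrate the logic.

```python
# Share of students below benchmark in each second-grade classroom.
# All numbers here are hypothetical; a school would substitute its
# own benchmark score and its own criterion for "high percentage."

BENCHMARK = 30      # minimum adequate screening score (hypothetical)
FLAG_RATE = 0.20    # flag a class if more than 20% fall below benchmark

classrooms = {
    "Room 201": [28, 25, 41, 22, 38, 27, 30, 24],
    "Room 202": [35, 33, 40, 29, 37, 36, 31, 34],
}

for room, scores in classrooms.items():
    below = sum(1 for s in scores if s < BENCHMARK)
    rate = below / len(scores)
    status = "review core instruction" if rate > FLAG_RATE else "core adequate"
    print(f"{room}: {below}/{len(scores)} below benchmark -> {status}")
```

If many classrooms at a grade level are flagged, that points to the core curriculum; if only scattered students are flagged, it points to individual student needs, which is the logic the paragraph above describes.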

Identifying Struggling Learners

The main purpose of screening all children in the school is to identify the students who are performing adequately and those who are at risk for academic failure. Schools use various assessments to evaluate how students are performing academically. Some examples of universal screeners are curriculum-based measurement (CBM), statewide assessments (Illinois Assessment of Readiness, Texas Assessment of Knowledge and Skills, California Smarter Balanced Summative Assessments), and other informal standards-aligned assessments. While each of these assessments provides valuable information about a student's performance, it is paramount that multiple sources of data be used to determine a student's need for additional academic support. Collecting data from multiple sources allows educators to confirm that an academic problem really exists across settings and time and is not simply a single piece of data that may or may not represent the student's actual academic functioning. Two key features of a universal screening tool are sensitivity and specificity. The sensitivity of the screening tool refers to how accurately it identifies students who are at risk (true positives), while the specificity refers to how well the tool identifies students who are not at risk (true negatives). Figure 2.2 depicts an "ideal" screen, in which every at-risk student and every not-at-risk student is identified correctly and no student is misclassified. Such a measure would be highly accurate in identifying struggling learners.

If we can reliably identify students who need additional support and those students who are performing adequately without additional support, we can be more efficient and effective in delivering targeted explicit instruction and interventions.
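Sensitivity and specificity can be computed directly once each student's true status is known (for example, from end-of-year outcomes). The function below applies the standard definitions; the counts for the 100-student example are invented for illustration.

```python
# Sensitivity = true positives / (true positives + false negatives)
# Specificity = true negatives / (true negatives + false positives)

def screen_accuracy(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # share of at-risk students correctly flagged
    specificity = tn / (tn + fp)  # share of not-at-risk students correctly passed
    return sensitivity, specificity

# Hypothetical screening of 100 students: 20 truly at risk, 80 not.
sens, spec = screen_accuracy(tp=18, fn=2, tn=72, fp=8)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
# → sensitivity = 0.90, specificity = 0.90
```

An "ideal" screen in the sense of Figure 2.2 would have both values equal to 1.0: every at-risk student flagged, and no student flagged unnecessarily.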

After educators administer the universal screening and then collect and analyze the student performance data, it is important for teachers to monitor the progress of their students throughout the year. Students who performed adequately on the screening assessment and are considered at or above benchmark only need to be monitored at the scheduled benchmarking periods (two or three times per year).

Figure 2.2 The Ideal Screen

Source: Adapted from Hosp, Hosp, & Howell, 2016.

Students who are identified as at risk and require supplemental support in mathematics should be monitored monthly to ensure that the interventions and additional support are effectively helping them make adequate progress toward the benchmark. Students identified as needing "intensive" support should receive more explicit small-group support in addition to Tier 2 services; these students should be monitored at least every two weeks (ideally weekly). The general rule of thumb about how frequently to monitor student progress is this: the more severe or intensive the need, the more frequently progress should be monitored. The National Center on Intensive Intervention provides information on selecting instruments for universal screening and progress monitoring (https://charts.intensiveintervention.org/chart/progress-monitoring). Its website contains reviews of several assessment measures, which are summarized in an easy-to-read "Tools Chart." Students who are identified as struggling will need additional support. For these students, we move to the next step in the problem-solving process: problem analysis.
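The monitoring schedules above can be summarized as a simple lookup keyed by level of need. The level labels ("benchmark," "supplemental," "intensive") are shorthand invented for this sketch; the frequencies are the ones stated in this section.

```python
# Map a student's level of need to a progress-monitoring schedule,
# following the rule of thumb that more intensive need calls for
# more frequent monitoring. Labels are illustrative, not a standard.

MONITORING_SCHEDULE = {
    "benchmark": "2-3 times per year (universal screening only)",
    "supplemental": "monthly",
    "intensive": "every 1-2 weeks (ideally weekly)",
}

def monitoring_frequency(level):
    """Look up how often to monitor progress for a given level of need."""
    try:
        return MONITORING_SCHEDULE[level]
    except KeyError:
        raise ValueError(f"unknown level of need: {level!r}")

print(monitoring_frequency("intensive"))  # → every 1-2 weeks (ideally weekly)
```

A team would adjust these frequencies to match its own tiered-support model and the guidance of whichever progress-monitoring tool it adopts.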

 