Progress Monitoring

Progress monitoring has garnered much less attention than universal screening in the educational research community. This may be because evaluating change in individual students is deceptively complex. Often, within multi-tiered systems of support (MTSS) or response to intervention (RtI) frameworks, students who are identified as at risk for later difficulties receive supplemental interventions. To gauge the effects of these interventions, and to determine whether a student is showing sufficient progress or another intervention should be attempted, data are periodically collected and plotted on time-series graphs. Educators then use visual analysis and/or decision rules to determine whether a change needs to be made or the student is on track to reach a future goal.

The QuALITY lab has been engaged in multiple lines of research to improve progress monitoring practices within MTSS and RtI frameworks. These lines of inquiry include analytic and data collection practices to improve the reliability (i.e., precision) of growth estimates, as well as methods to improve evaluations of student progress.

Analytic Strategies to Improve the Reliability and Precision of Progress Monitoring Outcomes

This line of research has employed extant data analyses and simulations to better understand factors that promote the reliability or precision of growth estimates from progress monitoring data. When collecting time-series data, each observation contains measurement error that obscures a student's "true" rate of improvement or growth. The first form of measurement error relates to the "bounce" of observations around a line of best fit. If data were perfectly reliable, each observation would fall exactly along a straight line. In reality, data are often noisy: things irrelevant to improvement in the target skill influence scores, as in the graphs below.
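As a rough illustration of that bounce, the brief sketch below (in Python) simulates weekly scores as a true linear improvement plus random measurement error. All parameter values (baseline, true weekly growth, error SD) are illustrative assumptions on our part, not data from any study:

```python
# Minimal sketch: weekly progress monitoring scores = true linear growth + random "bounce".
import numpy as np

rng = np.random.default_rng(seed=1)

weeks = np.arange(12)                    # 12 weekly progress monitoring occasions
true_baseline = 40.0                     # e.g., words read correctly per minute at week 0
true_weekly_growth = 1.5                 # the student's "true" rate of improvement
error_sd = 8.0                           # measurement error ("bounce") around the true line

true_scores = true_baseline + true_weekly_growth * weeks
observed = true_scores + rng.normal(0.0, error_sd, size=weeks.size)

# Fit a line of best fit (OLS) and examine how far observations fall from it
slope, intercept = np.polyfit(weeks, observed, deg=1)
residuals = observed - (intercept + slope * weeks)

print(f"true weekly growth:      {true_weekly_growth:.2f}")
print(f"estimated weekly growth: {slope:.2f}")
print(f"residual SD (bounce):    {residuals.std(ddof=2):.2f}")
```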

Highly variable data make trends harder to discern and make it difficult to determine whether an intervention is actually having the effect we think it is. In addition, the bounce of observations leads to less precise estimates of growth (i.e., a less stable line of best fit):

Here the dotted lines represent a confidence interval that one can draw around the line of best fit. In effect, given the imprecision of the observed trend line, it is difficult to conclude whether the student is actually improving in response to the intervention! A number of factors influence the precision of growth estimates from progress monitoring assessments, including but not limited to: a) the number of weeks data are collected, b) the number of data points collected, c) the type of measure used to assess student performance, d) the degree to which standardized procedures were followed when collecting data, e) the statistical method used to summarize growth, and f) the presence of extreme values or outliers in the data series. The QuALITY lab has been engaged in research to quantify the influence of each of these factors on the precision of growth estimates, as well as strategies to minimize error when collecting progress monitoring data.
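The small simulation below sketches how two of those factors, the number of occasions on which data are collected (a and b) and the noisiness of the measure (f), affect the precision of a growth estimate. The parameter values and the Monte Carlo approach shown are illustrative assumptions, not results from the lab's studies:

```python
# Sketch: how duration of data collection and measurement error affect the precision
# (standard error) of an OLS growth estimate, under assumed parameter values.
import numpy as np

rng = np.random.default_rng(seed=2)

def slope_se(n_weeks, error_sd, true_growth=1.5, n_reps=2000):
    """Monte Carlo standard error of the OLS slope under the stated assumptions."""
    weeks = np.arange(n_weeks, dtype=float)           # one observation per week
    slopes = np.empty(n_reps)
    for r in range(n_reps):
        scores = 40.0 + true_growth * weeks + rng.normal(0.0, error_sd, size=weeks.size)
        slopes[r] = np.polyfit(weeks, scores, deg=1)[0]
    return slopes.std(ddof=1)

for n_weeks in (6, 10, 16):
    for error_sd in (5.0, 10.0):
        se = slope_se(n_weeks, error_sd)
        print(f"{n_weeks:>2} weeks, error SD {error_sd:>4}: SE of weekly growth = {se:.2f}")
```

Under these assumptions, longer data collection and less noisy measures both shrink the standard error of the estimated weekly growth, which is consistent with the factors listed above.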


Methods to Evaluate Progress Monitoring Outcomes

Visual Analysis

After data are collected, plotted, and summarized, educators must make a decision (continue the intervention as designed, or make a change). One approach, visual analysis, requires educators and school psychologists to evaluate progress by visually inspecting student performance, often in conjunction with graphic aids (including goal lines and trend lines). Unfortunately, agreement between visual analysts regarding whether a student is showing progress is not always high. The QuALITY lab has investigated characteristics of data series (e.g., variability) and graphs (e.g., presence of visual aids) that influence the accuracy of visual analysis. As one example, different test vendors use different graphing styles to present data. In conjunction with Drs. Dart, Radley, and Klingbeil, members of the QuALITY lab investigated how evaluations of student progress differed when the same data were plotted using different graphing conventions. Results suggest that visual analysts' likelihood of concluding that an intervention was working differed across graph types, despite the fact that they were evaluating the same data.
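For readers unfamiliar with the graphic aids mentioned above, the minimal sketch below plots a made-up data series along with a goal (aim) line and an ordinary least squares trend line; it does not reproduce any particular vendor's graphing conventions, and the scores and goal are purely illustrative:

```python
# Sketch of a progress monitoring graph with two common visual aids:
# a goal (aim) line and an OLS trend line. Data and goal are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt

weeks = np.arange(10)
scores = np.array([38, 42, 40, 45, 43, 48, 46, 50, 49, 53], dtype=float)

# Goal line: from the first observation to an assumed goal of 60 at week 12
goal_weeks = np.array([0, 12])
goal_scores = np.array([scores[0], 60.0])

# Trend line: ordinary least squares fit to the observed scores
slope, intercept = np.polyfit(weeks, scores, deg=1)

plt.plot(weeks, scores, "o-", label="observed scores")
plt.plot(goal_weeks, goal_scores, "--", label="goal (aim) line")
plt.plot(weeks, intercept + slope * weeks, ":", label="trend line")
plt.xlabel("week")
plt.ylabel("score")
plt.legend()
plt.show()
```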

Decision Rules

In addition to visual analysis, researchers have developed automated decision rules to assist with evaluating student progress. One type, the data-point rule, requires an educator or school psychologist to evaluate the most recent 3, 4, or 5 data points in reference to an expected weekly rate of improvement or goal line. If all of the observations fall below the goal line, the educator considers making a change (the student is not improving enough). If all of the observations fall above the goal line, the educator considers increasing the goal (the student is improving and needs a more ambitious goal). If the observations fall above and below the goal line, the educator continues the intervention, collects another data point, and revisits the rule.
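A minimal sketch of the data-point rule is shown below, assuming the goal line can be expressed as an expected score for each occasion; the function name, wording of the recommendations, and example values are our own and purely illustrative:

```python
# Sketch of a data-point decision rule: compare the most recent observations to the goal line.
from typing import Sequence

def data_point_rule(scores: Sequence[float], goal_line: Sequence[float], n_last: int = 3) -> str:
    """Compare the most recent n_last observations to the corresponding goal-line values."""
    if len(scores) < n_last:
        return "collect more data"                       # not enough observations to apply the rule
    recent = scores[-n_last:]
    expected = goal_line[-n_last:]
    if all(obs < exp for obs, exp in zip(recent, expected)):
        return "consider changing the intervention"      # all recent points below the goal line
    if all(obs > exp for obs, exp in zip(recent, expected)):
        return "consider raising the goal"               # all recent points above the goal line
    return "continue and collect another data point"     # recent points fall on both sides

# Example: a student whose last three scores all fall below the goal line
scores    = [40, 44, 43, 45, 44, 46]
goal_line = [40, 42, 44, 46, 48, 50]
print(data_point_rule(scores, goal_line))   # -> consider changing the intervention
```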

Another type of decision rule involves estimating a line of best fit, or trend line, through all of the observations collected. The slope of that line (i.e., the average rate of improvement per week) is then compared to an expected rate of growth (or goal line). If the slope of the trend line is less than the slope of the goal line, a change is considered. If the slope of the trend line is greater than the slope of the goal line, the intervention is continued and the slope of the goal line is increased. If the slopes of the two lines approximate one another, the intervention is continued and another data point is collected before the rule is applied again.
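The trend-line rule can be sketched in a similar way; in the sketch below, the tolerance used to decide that the two slopes "approximate one another" is an assumption for illustration, not a published criterion:

```python
# Sketch of a trend-line decision rule: compare the OLS slope of the observed series
# to the slope of the goal line. The tolerance value is an illustrative assumption.
import numpy as np

def trend_line_rule(weeks, scores, goal_slope, tolerance=0.1):
    """Return the fitted weekly rate of improvement and a recommendation."""
    observed_slope = np.polyfit(np.asarray(weeks, float), np.asarray(scores, float), deg=1)[0]
    if observed_slope < goal_slope - tolerance:
        return observed_slope, "consider changing the intervention"
    if observed_slope > goal_slope + tolerance:
        return observed_slope, "continue and raise the goal"
    return observed_slope, "continue and collect another data point"

weeks  = [0, 1, 2, 3, 4, 5, 6, 7]
scores = [40, 41, 43, 42, 45, 46, 45, 48]
slope, decision = trend_line_rule(weeks, scores, goal_slope=1.5)
print(f"estimated growth = {slope:.2f} per week -> {decision}")
```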

The QuALITY lab has been engaged in research to determine practices that influence the accuracy of recommendations from these and novel decision rules using extant analyses and simulation methodology. In particular, we have investigated how the duration over which data are collected, the number of observations available, the type of decision rule used, and the type of assessment used influence the accuracy of the resulting recommendations. Much of this work has been conducted in collaboration with Drs. Nelson and Parker, from ServeMinnesota, as part of a national AmeriCorps program that delivers supplemental reading and math interventions to elementary students.


Cognitive Biases

In the newest line of research being undertaken in the lab, we have begun investigating the presence and determinants of common cognitive biases when evaluating progress monitoring graphs. There is a rich history of research on the role cognitive biases play in the assessment process within school psychology. In particular, decades of research have explored how sociodemographic features of students frame school psychologists' evaluations of the presence of various disabilities. Unfortunately, research on factors that may influence decision-making practices within MTSS or RtI frameworks has not kept pace. As an example, in one study we are exploring whether educators and school psychologists are more susceptible to the sunk-cost fallacy when deciding whether to change an intervention if they had a role in selecting which intervention the student received versus being told what intervention would be used with the student. In another investigation we are exploring whether exposing educators and school psychologists to data suggesting a student is initially struggling will lead to anchoring effects that cause them to discount subsequent improvements in performance. This line of research is novel to the lab, and we look forward to better understanding which cognitive biases influence progress monitoring decisions.


Please note that the QuALITY lab is always looking to partner with educators to address applied problems that they face (e.g., is this progress monitoring measure performing how we think it is performing?), and we have collaborated numerous times to publish those findings in peer-reviewed studies. Through working with local school districts in the Lehigh Valley and beyond, we hope to make inroads in addressing the research-to-practice gap. Please feel free to contact us if you would like help evaluating your progress monitoring practices or would like to pursue an applied research project.