Blog Series

Progress Monitoring: A Tool to Drive Instruction or Just More Testing?

Dr. Kelly A. Powell-Smith

Kelly A. Powell-Smith, Ph.D., NCSP, is Professor of Reading Science at Mount St. Joseph University and the former Chief Science Officer at Acadience Learning. She is the lead author of Acadience RAN, Acadience Reading Survey, the Acadience Reading Diagnostic assessments, and Acadience Spelling. She obtained her doctorate in school psychology from the University of Oregon and has served as an Associate Professor of School Psychology at the University of South Florida, a faculty associate of the Florida Center for Reading Research, and a consultant with the Eastern Regional Reading First Technical Assistance Center. She currently serves on several editorial boards, including School Psychology Review, School Psychology Forum, and Single-Case in the Social Sciences. Her work has been cited in more than 200 professional journals. Dr. Powell-Smith has provided training in assessment and intervention in 23 states and Canada and has conducted 285 national, state, and regional workshops and presentations.

Modified on August 28, 2025

Progress monitoring is said to be the heart of a Multi-Tiered System of Supports (MTSS) framework for educational service delivery, yet I have found it to be an underutilized tool. Might this be because of questions about progress monitoring and how to best leverage it to improve outcomes? 

There are several questions I often hear asked about progress monitoring. I plan to address them here to help educators better leverage the instructional utility of progress monitoring so that it is not relegated to the status of “just more testing.” Before doing so, let’s revisit what progress monitoring is and why it is important.

What Is Progress Monitoring?

Progress monitoring is a method for quickly and efficiently gauging change in something of importance. For example, if diagnosed with high blood pressure, you might monitor your blood pressure to observe changes in response to medication or dietary intervention. In education, we use progress monitoring to gauge whether students are making sufficient progress to meet desired goals. Progress-monitoring data facilitate real-time educational decisions, much the way a GPS signals a driver to adjust their route when they have gone off course. Key components of progress monitoring include: (a) a goal toward which progress is monitored, (b) ongoing data collection, (c) graphing and reviewing data, and (d) applying decision rules. As such, progress monitoring is not just about collecting data!
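To make the decision-rules component concrete, here is a minimal sketch in Python of one common convention from curriculum-based measurement: compare the most recent graphed data points to the aim (goal) line. The three-point run length and the suggested actions are illustrative assumptions for this example, not a prescribed standard.

```python
def decision_rule(scores, aim_line, run_length=3):
    """Compare recent progress-monitoring scores to the aim (goal) line.

    scores     -- student scores, one per monitoring session
    aim_line   -- the expected (goal-line) score at each session
    run_length -- consecutive points required before acting
                  (3 is a common convention, not a fixed standard)
    """
    paired = list(zip(scores, aim_line))
    if len(paired) < run_length:
        return "collect more data"
    recent = paired[-run_length:]
    if all(score < aim for score, aim in recent):
        return "adjust instruction"         # consistently below the aim line
    if all(score > aim for score, aim in recent):
        return "consider raising the goal"  # consistently above the aim line
    return "continue instruction"
```

The point of a rule like this is that it turns graphed data into an instructional decision, which is what distinguishes progress monitoring from merely collecting data.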

Why Is Progress Monitoring Important?

Progress monitoring is important for at least two reasons. First, it provides the essential data for decision-making within MTSS, where close monitoring and data-driven decision-making guide the next best instructional steps, including movement between tiers of support. Second, progress monitoring can positively impact student outcomes. The work of John Hattie (2009, 2017) tells us progress monitoring is among the most powerful educational tools for improving student outcomes. Student outcomes are improved when: (a) meaningful, ambitious, and attainable goals are established; (b) feedback is provided to students and teachers on progress toward goals; (c) graphing and decision rules are employed; and (d) feedback includes information about what and how to change instruction (Fuchs & Fuchs, 1986; Fuchs, Radkowitsch, & Sommerhoff, 2025; Hattie, 2009, 2017). Progress monitoring helps ensure effective instructional practices are continued and ineffective ones are adjusted or discontinued.

With the purpose and importance established, I’d like to address some common questions I often hear about progress monitoring.

For Whom Is Progress Monitoring Important?

Progress monitoring is important for all students. While you might think this is unrealistic and far too time-consuming, bear with me for a moment. Different types of progress monitoring are conducted depending on the level of concern. So, what are those different types? First, we have periodic monitoring, often referred to as benchmark assessment or universal screening, which is typically conducted three times per year. Second is frequent monitoring, the kind that occurs on a weekly, biweekly, or monthly basis (i.e., between benchmarks). Each of these types of monitoring serves a different purpose.

Benchmark monitoring serves as a check on the progress of all students in a grade level, but perhaps more importantly, it provides a check on the health of Tier 1 instruction. It allows educators to explore the question of whether the Tier 1 system of support is meeting the needs of most students (e.g., 80 percent). We can also examine the effectiveness of our Tier 2 and 3 systems of support by determining whether we are closing achievement gaps and reducing risk for students receiving those intervention supports.

Frequent monitoring is used to tell us if an individual student (e.g., a student receiving intervention support) is on track to meet their goals and responding sufficiently to intervention. One question I often hear is: How frequently should this monitoring occur? Generally speaking, the frequency of progress monitoring should match the level of concern. For students with the greatest needs (e.g., those receiving Tier 3 support), weekly monitoring is best. For students about whom we are less concerned (e.g., those receiving Tier 2 support), every other week or once per month may be sufficient. When odd patterns in the data are observed, consider collecting additional data to make better decisions. For example, you might collect more data in each progress-monitoring session (e.g., three passages for oral reading fluency), or you might move to weekly monitoring for a time.

What Materials Should Be Used?

The overarching principle when selecting materials is that the data they provide should help you answer key questions, such as: Is our Tier 1 system helping most students meet grade-level goals? Is this student making progress toward their goals? Progress-monitoring tools should have specific characteristics, including:

  • Sufficient reliability and validity for progress monitoring purposes
  • Sensitivity to small changes in student performance
  • Utility for repeated assessment (e.g., alternate forms of equivalent difficulty available)
  • Efficiency: brief, low-cost indicators of broader essential skills
  • Easily understood by educators

For benchmark monitoring, grade-level materials are used. For frequent monitoring, decisions about materials are more nuanced. First, the progress-monitoring materials should be an indicator of skills targeted by the intervention. For example, if we are working on building fluency with connected text, then oral reading fluency is an indicator of improvement. Second, student skill level should be considered. 

For many students receiving intervention support, the appropriate progress-monitoring materials will be below their grade level. We want materials that are at the right level of challenge, but not too challenging. When material is too difficult, it is unlikely to show student learning, and it may result in student frustration and the practicing of undesirable mistakes. One way to determine the just-right level is to survey back, or test down, in the materials to find the student's progress-monitoring level. Acadience® has a product designed explicitly for this purpose, Acadience® Reading Survey.
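The survey-back idea can be sketched as a simple loop: assess the student in successively lower grade-level materials until their score lands in an instructional range. This is a minimal illustration only; the score range and the data are invented assumptions for the example, not Acadience Reading Survey criteria.

```python
def find_monitoring_level(score_by_grade, student_grade, low=40, high=90):
    """Survey back (test down) to find a progress-monitoring level.

    score_by_grade -- {grade: score the student earned in that
                       grade's materials}, e.g. oral reading fluency
    low, high      -- an illustrative instructional range; NOT an
                      actual Acadience criterion
    """
    # Step down from the student's enrolled grade until the score
    # falls within the instructional range.
    for grade in range(student_grade, 0, -1):
        score = score_by_grade.get(grade)
        if score is not None and low <= score <= high:
            return grade
    # Fall back to the lowest level tested if nothing qualifies.
    return min(score_by_grade)
```

For example, a fourth grader who reads far too haltingly in grade 4 and grade 3 passages but comfortably in grade 2 passages would be monitored in grade 2 materials.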

How Do We Know How Much Progress Is Enough?

I often hear educators ask how much progress is reasonable. There are several ways to gauge progress, each with its own benefits and drawbacks. This table details some of these.

Table 1

Benefits and Drawbacks for Various Means of Gauging Student Academic Progress

Norms

Benefits:
  • National: Decisions are anchored to how a national sample of children performs
  • Local: Decisions are anchored to how students in the local environment are performing

Drawbacks:
  • National norms may not be representative of the local context
  • Local norms may not represent adequate progress compared to a broader sample (the comparison could be too narrow)
  • May not represent performance that places the odds of future success in a student's favor

Rate of Improvement (ROI)

Benefits:
  • Provides a week-by-week expectation for growth, typically anchored to some normative expectation
  • Often takes into account a student's initial skills (starting point)

Drawbacks:
  • Progress is often interpreted by comparing a student's slope to the ROI expectation, and slope is problematic due to its unreliability. Some research suggests minimally stable decisions about progress using slope require three or more months of data collection, too long a delay to be of practical benefit.

Benchmarks (e.g., Acadience Benchmarks)

Benefits:
  • Research-based and criterion-referenced, and thus linked to important outcomes
  • If a goal is reached, the student is likely to meet future goals
  • Help you know the odds or likelihood of meeting future expectations

Drawbacks:
  • May not consider normative expectations (what is possible)
  • Do not take initial skills into consideration
  • May make it more challenging to determine ambitious and attainable goals for students at performance extremes (e.g., very high or very low skills)

Growth Percentiles (e.g., Acadience Pathways of Progress)

Benefits:
  • Provides a normative index for gauging progress, helping to understand what is possible and what is ambitious
  • Takes into account the student's initial skills (starting point)

Drawbacks:
  • Requires professional judgment regarding an appropriate level of ambitiousness

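To make the slope discussion in Table 1 concrete, here is a minimal sketch of how a rate of improvement is typically estimated: an ordinary least-squares slope fit to weekly scores. The scores below are invented for illustration; the instability noted in Table 1 is exactly why a slope computed from only a few weeks of data should be interpreted cautiously.

```python
def weekly_roi(scores):
    """Estimate rate of improvement (ROI) as the ordinary
    least-squares slope of score against week number (1, 2, 3, ...)."""
    n = len(scores)
    weeks = range(1, n + 1)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    # Slope = covariance of (week, score) / variance of week
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

# A student gaining exactly 2 words correct per week:
# weekly_roi([41, 43, 45, 47]) returns 2.0
```

The resulting slope would then be compared to a normative ROI expectation; with noisy real-world data, the estimate bounces around considerably until many data points accumulate.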
Here are some recommendations for educators to consider. First, use benchmarks for Tier 1 monitoring from one benchmark to the next. At midyear, examine the percent of students who began the year at benchmark to determine how many stayed at benchmark. Are at least 95 percent of those students still at or above benchmark midyear, and at end of year?

Second, use a combination of benchmarks and Pathways of Progress for monitoring students receiving intervention support (e.g., Tier 2 or Tier 3). Are students reducing their risk and closing achievement gaps (i.e., reaching benchmark or a lower level of risk)? Also, consider how students are performing compared to other students who began the year with the same initial skills—are they making below typical, typical, or above typical progress in comparison? To sufficiently close achievement gaps, students who are below and well below benchmark need to make at least above-typical progress. 

As you examine progress monitoring data, keep in mind that we want student growth to be meaningful and ambitious, but attainable. These three characteristics, described in Table 2, need to be balanced.

Table 2

Desirable Characteristics of Individual Student Learning Goals

Meaningful:
  • Increases the odds of future reading health
  • Represents growth that results in achieving meaningful outcomes, or increases the chances of achieving meaningful outcomes in the future (e.g., reaching proficient reading at or above benchmark, or reducing risk by moving from well below benchmark to below benchmark)

Ambitious:
  • Represents sufficient progress to close achievement gaps or exceed what is typically expected
  • Research suggests goal ambitiousness is more important than attainability
  • Students rise to meet expectations

Attainable:
  • Other students with similar initial skills have made that much progress

Ultimately, progress monitoring should result in decisions that enable educators to improve outcomes for students. Progress monitoring should provide instructionally relevant and timely information to inform instruction. When progress monitoring yields these results, it is valuable and not just more testing.

Wondering What To Use For Intervention? 

Voyager Sopris Learning has a number of high-quality, research-supported materials available (e.g., LANGUAGE! Live®, REWARDS®, and Sound Partners).

 

References

Fuchs, A., Radkowitsch, A., & Sommerhoff, D. (2025). Using learning progress monitoring to promote academic performance: A meta-analysis of the effectiveness. Educational Research Review, 46. https://doi.org/10.1016/j.edurev.2024.100648

Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53(3), 199-208. https://doi.org/10.1177/001440298605300301

Hattie, J. A. C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.

Hattie, J. A. C. (2017). Visible learning plus 250+ influences on student achievement. https://visible-learning.org/wp-content/uploads/2018/03/VLPLUS-252-Influences-Hattie-ranking-DEC-2017.pdf
