Oral Reading Fluency is NOT a Measure of Reading Fluency
by Jan Hasbrouck on March 26, 2014
“What the what?” as Liz Lemon of TV’s 30 Rock might say. Oral reading FLUENCY doesn’t measure reading FLUENCY? How can that be? Well, the answer is … it’s complicated.
To fully explain, we should probably begin by revisiting what the Oral Reading Fluency (ORF) measure really is. We have to begin back in the early 1980s, when a team of researchers and doctoral students at the University of Minnesota began to explore the idea that simple measures of academic performance could potentially serve as indicators of the academic competence of a student at a particular point in time—and then, perhaps, also be used to monitor the trajectory of skill development over a period of time (Jenkins & Fuchs, 2012).
The researchers began to refer to these assessments as curriculum-based measures, or CBM. One of the measures that attracted attention was initially called RAFT: Reading Aloud From Text in a fixed time. Does that sound similar to the now widely used measure called Oral Reading Fluency? RAFT was the original version of what we now call ORF. Today, more than 30 years later, research has clearly demonstrated that counting the number of words read aloud correctly in one minute from standardized passages is an excellent measure of general reading proficiency, including reading comprehension (Jenkins & Fuchs, 2012; Wayman, Wallace, Wiley, Ticha, & Espin, 2007).
When these CBM measures were first developed, they were not widely used outside of special education. But that changed rather dramatically when the federal government created Reading First (RF) with a mission to put proven methods of early reading instruction in classrooms. RF required participating schools to use scientifically based reading research—and the proven instructional and assessment tools consistent with this research—with the goal of having all children learn to read well by the end of third grade.
RF schools were required to assess students to determine their reading achievement compared to established benchmarks at least three times per year, using assessments that had proven reliability and validity. Many RF schools began using various commercially available versions of ORF to fulfill this requirement (including DIBELS®, aimsweb, Reading Fluency Benchmark Assessor, and easyCBM™).
Rather suddenly, many schools were using the ORF assessment, and many educators working in those schools became confused. They often wondered: “How can we be asked to rely on a very short measure of a single, isolated reading skill (fluency) to determine proficiency in the highly complex task of reading? Isn’t comprehension much more important?” (Hamilton & Shinn, 2003). The answer to this reasonable and important question has three parts:
#1: We must remember that ORF is designed to serve only as a single “indicator of the academic competence of a student at a particular point in time.” It is analogous to a thermometer, which is also a scientifically developed measure that can be used quickly to determine how someone’s body temperature compares to an established benchmark. But, like ORF, a thermometer reading cannot be used diagnostically. The absence of a fever does not “prove” or “guarantee” that someone is healthy (a broken leg or heart disease may not cause a fever), and conversely, a high fever of 103 degrees could be caused by many different illnesses or conditions. With an ORF measure we can take a student’s academic “temperature,” but we must be cautious in interpreting the result.
#2: Yes, of course, we should always keep our focus on students’ ability to comprehend what they read. But we must remember that there is now a large body of research that has established, as odd as it may seem, that when an ORF is correctly administered and the results correctly interpreted, it does in fact work quite well as an “indicator” of overall reading skill level, including comprehension.
#3: A third response that could be given to those many confused educators is to explain that ORF, in fact, does NOT measure the skill of fluency! Reading fluency is a highly complex skill that involves accuracy, rate, and expression. It requires the intricate weaving of multiple skills and underlying components (Hasbrouck & Glaser, 2011). It cannot be measured using only one 60-second sample. So, although we call this CBM measure an assessment of reading fluency, that is NOT what it is assessing.
Perhaps we could have saved much angst and agitation if the original CBM researchers had simply kept the original name: RAFT (Reading Aloud From Text in a fixed time). Or how about IRP as a better name for this measure: an Indicator of Reading Proficiency?
ORF has a valuable place in our toolkits as professional educators, but we must understand what it really measures, administer and interpret it correctly, and use it only for its intended purpose.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5(3), 239-256.
Hamilton, C., & Shinn, M. R. (2003). Characteristics of word callers: An investigation of the accuracy of teachers’ judgments of reading comprehension and oral reading skills. School Psychology Review, 32(2), 228-240.
Hasbrouck, J., & Glaser, D. R. (2011). Reading fluency: Understanding and teaching this complex skill. Gibson, Hasbrouck & Associates. www.gha-pd.com
Jenkins, J. R., & Fuchs, L. S. (2012). Curriculum-based measurement: The paradigm, history, and legacy. In C. A. Espin, K. L. McMaster, S. Rose, & M. M. Wayman (Eds.), A measure of success: The influence of curriculum-based measurement on education (pp. 7-23). Minneapolis, MN: University of Minnesota Press.
Wayman, M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based measurement in reading. Journal of Special Education, 41, 85-120.