Aligning Assessment to Meet Schoolwide Literacy Needs
“The proof is in the pudding.” This phrase might conjure up a variety of images, but it is not about dessert. Instead, it traces back to the 14th century and has to do with the outcomes of eating a particular food called pudding, which was made up of various meats and spices. Specifically, the phrase meant that the proof of the food being good lay in the outcome of eating it. Likewise, the proof that we are making wise assessment choices in schools is in the outcomes we achieve through their use. It is not enough for assessments to be designed for a specific purpose and technically adequate (i.e., reliable and valid in the traditional sense). The consequences of assessment must also be examined. Messick (1989, 1995) refers to this as consequential validity.
A great deal of time and resources in education are expended on assessment. As such, it’s worth spending some time thinking about the return on that investment and how to maximize that return. In particular, if we wish to operate within a Multi-Tiered System of Support (MTSS), we will want to consider how our assessments fit into and inform decisions to be made within that framework. As a framework for service delivery, MTSS provides a broad, but unified, means of aligning school systems and resources aimed at the success of all students across multiple areas (e.g., academics, behavior, and social-emotional functioning). With this as our goal, we should consider how our assessments serve as catalysts for taking actions, and more specifically, actions that improve student and educator outcomes. Good assessment empowers educators to make decisions leading to actions resulting in improved outcomes.
Let’s take a closer look at these action steps, the types of assessments that may inform those actions, and the qualities those assessments should have with respect to informing schoolwide literacy decisions. Good decisions from assessment data are more likely to occur when one operates within a decision-making framework. Assessment choices should be driven by the decision you are trying to make and the questions you are trying to answer. In our work at Acadience® Learning, we use a model called the Outcomes-Driven Model (ODM), which includes these five decision-making steps:
- Identify Need for Support
- Validate Need for Support
- Plan and Implement Support
- Evaluate and Modify Support
- Review Outcomes
Every step of the ODM has accompanying questions to be answered relative to individual students as well as systems (e.g., classroom, grade level, school) (see Table 1). The steps of the model fit very nicely with an MTSS approach to service delivery and a focus on improving outcomes for all students.
Within the ODM, we need to consider the types of assessments necessary to address the questions to be answered. Four general types of assessment are frequently used in schools and typically associated with an MTSS service delivery model: screening, diagnostic, progress monitoring, and outcome assessments. So, what are the purposes of these assessments, the questions they were designed to answer, their desirable characteristics, and some specific examples?
Table 1. The Outcomes-Driven Model Steps and Questions

| ODM Step | Systems-Level Questions | Student-Level Questions |
| --- | --- | --- |
| Identify Need for Support | Are there students who may need support? How many students may need support? | Which students may need support? Who are they? |
| Validate Need for Support | Are we reasonably confident in the accuracy of our data overall? | Are we reasonably confident that the identified students need support? |
| Plan and Implement Support | At what grade levels and/or in what areas may support be needed? What are our systemwide goals? What is our systemwide plan for support? | What are the student’s skills and needs? What is the plan of support for the student, including goals and a plan for progress monitoring? |
| Evaluate and Modify Support | Are we making progress toward our systemwide goals? Is our system of support effective? | Is each student making adequate progress? Is the support effective for individual students? |
| Review Outcomes | Have we met our systemwide goal? Is our system of support effective? Are there students who may need support? How many students may need support? | Has the support been effective for individual students? Has the individual learning goal been met for each student? Which students may need support? |
The purpose of screening assessments is to quickly determine who is on track to meet important outcomes and who may need additional support beyond high-quality, evidence-based, core instruction to reach those outcomes. As such, these assessments are designed to answer the following questions:
- Are there students who may need additional support?
- Which students are in need of support? Who are they?
- How many students need additional support?
- Are we confident that the student(s) need support?
Furthermore, screening assessments should have these qualities:
- Efficient and inexpensive
- Strong indicator of broader skill
- Universal, given that an MTSS framework is designed to support all students
- Provide information about who is on track and who is likely in need of support
- Strong technical adequacy, including reliability, validity, and diagnostic accuracy sufficient for the decisions being made
By design, screening assessments do not tell you everything (i.e., you may need to gather more information to inform intervention). An example of a screening assessment for reading that meets these criteria is Acadience® Reading K–6.
The purpose of diagnostic assessments is to help identify specific instructional targets (i.e., skills to teach) and verify levels of support needed. In the ODM framework, diagnostic assessment is about determining the best approach to instruction. As such, these assessments are designed to answer the following questions:
- What skills and needs should be targets of intervention and change(s) in instruction?
- What is our plan to address needs?
- What is our goal or what are our goals for the student and/or system?
- What is the plan for progress monitoring (e.g., materials to use, frequency, etc.)?
Diagnostic assessments are more in-depth than screening assessments. Because they are more resource intensive (i.e., they take more time), they are administered to some, but not all, students. These assessments are typically administered after universal screening has identified an area of potential intervention but more information is needed to better target that intervention.
Diagnostic assessments should have these qualities:
- Time efficient for purpose
- Sufficiently comprehensive to provide specific information for differentiating instruction
- Aligned to the essential skills to be mastered for the domain and grade of interest
- User friendly and adaptable across instructional settings
- Strong technical adequacy, including reliability and validity for the decisions being made
Example diagnostic assessments for reading that meet these criteria are Acadience® Reading Survey and Acadience® Reading Diagnostic.
The purpose of progress-monitoring assessments is to quickly and efficiently gauge student performance in key skills targeted for intervention to determine if sufficient progress is being made toward a desired outcome. These assessments are designed to answer the following questions:
- Is the student making progress toward their goals? Are we, as a system, making progress toward our goals?
- Is the support provided effective or do we need to change what we are doing?
- If change is needed, what kind of change?
Progress-monitoring assessments are administered to any student receiving intervention. The frequency of administration is dictated by the level of need and the intensity of the intervention. For example, students who are receiving supplemental support, such as at Tier 2, might be monitored one or two times per month, while students receiving intensive intervention, such as at Tier 3, would be monitored once per week.
Progress-monitoring assessments should have these qualities:
- Valid and reliable for the decisions to be made
- Sensitive to small changes in performance over time
- Useful for repeated assessment (e.g., alternate forms of equivalent difficulty available)
- Efficient, brief, low-cost indicators of a broader essential skill
- Easily understood
- Provide a feedback loop and safety net for educational decisions
An example progress monitoring assessment for reading that meets these criteria is Acadience® Reading K–6.
The purpose of outcome assessments is to determine if a goal or important outcome has been achieved at a specific point in time (e.g., end of unit, end of grade, etc.). These assessments are designed to answer the following questions:
- In general, is our instruction effective (across all tiers)?
- Has the additional support provided been effective?
- Have goals, at the individual and system level, been met?
- Are there still students in need of support? If so, how many and who?
- Are students meeting expectations at a broader level (e.g., state criteria)?
Some outcome assessments may address the first four questions and are closely tied to our teaching (e.g., an end-of-year benchmark assessment). Other outcome assessments may address the last question and are intended to measure broader standards (e.g., end-of-year statewide testing). The desirable characteristics of assessments that address the first four questions are the same as those for screening assessments. An example of an assessment for reading that meets these criteria and addresses the first four questions is Acadience® Reading K–6.
A seamless and integrated assessment system that incorporates each of the four assessment types and that measures the most critical early literacy and reading skills is ideal. The critical early literacy and reading skills to assess include: (a) phonemic awareness; (b) phonics and decoding (basic and advanced); (c) vocabulary and oral language; (d) accurate and fluent reading of connected text (i.e., sufficient fluency for comprehension and appropriate prosody); and (e) comprehension. An important benefit of a seamless assessment system is that it results in greater efficiency. And, this is a consequence I think educators can get behind.
Next week, I will be presenting a webinar on this topic during which I will elaborate on what we should look for in our MTSS assessment system to address all students' literacy needs. Hope to see you there!
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). ACE & NCME.

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749. https://doi.org/10.1037/0003-066X.50.9.741