
Value-Added Assessments Treat Teachers Like Cattle

School districts are under pressure from the federal government, foundations, and states to include value-added assessment as part of teacher evaluation, in service of the widely supported policy goal of identifying the most and least effective teachers in a school system. On its face, the argument for value-added models (VAM) seems to make sense: how well a student does after a year with a teacher should indicate how effective that teacher was. But by what measures? How valid are those measures? If the student measure is a score on a standardized test, what evidence do we have that the test accurately reflects teacher effectiveness? And which students are being compared?

You can’t fatten the cow by weighing it, and the same goes for improving the quality of teaching.

According to the American Statistical Association, “VAMs attempt to predict the ‘value’ a teacher would add to student achievement growth, as measured by standardized test scores, if each teacher taught comparable students under the same conditions.” VAMs are usually based on standardized test scores and do not directly measure teachers’ contributions toward other student outcomes such as creativity, curiosity, and persistence. Moreover, this type of measure is at best correlational, yet it is often treated as causal. In other words, even when a trend is found, it does not necessarily mean the trend was caused by what the teacher did or did not do. The differences may actually stem from factors the model does not capture: poverty, school climate, student mobility, and school safety.
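To make the omitted-variable problem concrete, here is a minimal simulation, with entirely hypothetical names and numbers rather than any district’s actual model. Two teachers add identical true value, but one serves a higher-poverty school that the model ignores:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200  # hypothetical number of students per teacher

    # Both teachers add the same true value to score growth.
    true_effect = {"Teacher A": 5.0, "Teacher B": 5.0}

    # Omitted factor: Teacher A serves a higher-poverty school, which
    # independently depresses measured growth (hypothetical magnitude).
    poverty_penalty = {"Teacher A": -4.0, "Teacher B": 0.0}

    for teacher, effect in true_effect.items():
        noise = rng.normal(0, 3, n)  # ordinary measurement noise
        growth = effect + poverty_penalty[teacher] + noise
        print(f"{teacher}: naive VAM estimate = {growth.mean():.1f} points")

A model that omits poverty scores Teacher A at roughly 1 point and Teacher B at roughly 5, even though both contributed identically. The ranking reflects who was taught, not how well.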

Another factor to consider is the use of cohort scores. Comparing this year’s 6th grade class to last year’s 6th grade class is a fairly common way of looking at growth over time. That would be like comparing the earned run average (ERA) of last year’s high school baseball team to that of this year’s team. The pitchers are not necessarily the same. The opponents aren’t the same. There might even be a new head coach. Would this kind of comparison be the best way to evaluate the pitching coach? Should that coach lose his job if the overall ERA of this year’s team rose a few points, or be rewarded because an all-star pitcher transferred in?
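The same caution can be sketched in code. The toy example below, again with hypothetical numbers, simulates one teacher whose instruction is identical two years running but whose classes arrive with different incoming preparation:

    import numpy as np

    rng = np.random.default_rng(1)
    teacher_effect = 5.0  # the same true contribution both years

    # Hypothetical cohorts: each year's 30 students arrive with a
    # different mix of incoming preparation.
    last_year = teacher_effect + rng.normal(2.0, 8.0, 30)
    this_year = teacher_effect + rng.normal(-1.0, 8.0, 30)

    print(f"Last year's 6th grade mean growth: {last_year.mean():.1f}")
    print(f"This year's 6th grade mean growth: {this_year.mean():.1f}")

The teacher changed nothing, yet the year-over-year comparison shows a “decline,” just as a pitching coach’s ERA shifts with the roster.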

A 2014 study by Polikoff and Porter found, “no association between multiple-measure teaching effectiveness ratings—which combine value-added measures with survey and observational ratings of teacher quality—and the content of teachers’ instruction in the classroom.” They go on to state, “Given the growing extent to which states are using these measures for a wide array of decisions, our findings are troubling.”

The recently passed Every Student Succeeds Act (ESSA) gives states new freedom in how they evaluate teachers. The new law does not require states to set up teacher-evaluation systems based in significant part on students’ test scores, a condition the U.S. Department of Education had attached to waivers under ESSA’s predecessor, the No Child Left Behind Act. I hope states will step back and look at the emerging research on how weakly VAM scores track actual teacher effectiveness. An effective teacher is much more than a test score.

If you want to fatten the cow, you have to feed it. If you want to improve the quality of teaching, you have to provide teachers with meaningful support.

  • Howard Pitler, Ed.D. is an author of Classroom Instruction that Works, 2nd ed., Using Technology with Classroom Instruction that Works, and A Handbook for Classroom Instruction that Works, 2nd ed. He has worked with teachers and administrators internationally for over a decade to improve outcomes for kids. He was named a National Distinguished Principal by NAESP and is an Apple Distinguished Educator. He can be reached at hpitler@gmail.com, on Twitter at @hpitler, or on his website, www.hpitler.com.
