
Study backs ‘value-added’ analysis of teacher effectiveness


By Jason Felch

Teachers’ effectiveness can be reliably estimated by gauging their students’ progress on standardized tests, according to the preliminary findings of a large-scale study released Friday by leading education researchers.

The study, funded by the Bill and Melinda Gates Foundation, provides some of the strongest evidence to date of the validity of “value-added” analysis, whose accuracy has been hotly contested by teachers unions and some education experts who question the use of test scores to evaluate teachers.


FOR THE RECORD:
Teacher scores: A Dec. 11 article in the LATExtra section reported that a preliminary study by education experts had found that teachers whose students said they “taught to the test” scored lower than average on value-added analysis. In fact, the study found that test preparation was positively correlated with a teacher’s value-added scores, but not as strongly as other indicators, such as effective classroom management or efficient use of class time.


The approach estimates a teacher’s effectiveness by comparing his or her students’ performance on standardized tests to their performance in previous years. It has been adopted around the country in cities including New York; Washington, D.C.; Houston; and soon, if local officials have their way, Los Angeles.
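At its core, the computation is straightforward: predict each student’s current score from his or her prior score, then credit each teacher with the average amount by which his or her students beat (or miss) that prediction. The Python sketch below illustrates that basic idea; it is a simplified illustration with made-up numbers, not the model used in the study or by any district, and the function and data in it are hypothetical.

# A minimal sketch of the value-added idea described above, not the
# study's actual statistical model. All names and data are hypothetical.
import numpy as np

def value_added(prior, current, teacher_ids):
    """Estimate each teacher's 'value added' as the mean amount by which
    their students outperform the score predicted from prior-year tests."""
    # Predict this year's score from last year's with a simple linear fit.
    slope, intercept = np.polyfit(prior, current, deg=1)
    predicted = slope * prior + intercept

    # A student's residual is how far above or below prediction they landed.
    residuals = current - predicted

    # A teacher's estimate is the average residual across their students.
    return {t: residuals[teacher_ids == t].mean()
            for t in np.unique(teacher_ids)}

# Hypothetical example: two teachers, three students each, with
# identical prior-year scores but different current-year gains.
prior    = np.array([60, 70, 80, 60, 70, 80], dtype=float)
current  = np.array([68, 78, 88, 58, 68, 78], dtype=float)
teachers = np.array(["A", "A", "A", "B", "B", "B"])
print(value_added(prior, current, teachers))  # A scores above average, B below

Because each student is compared only to the trajectory predicted from his or her own past scores, factors that depress scores year after year, such as poverty, largely cancel out; that is the intuition behind the claim that the method controls for influences outside teachers’ control.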

The $45-million Measures of Effective Teaching study is a groundbreaking effort to identify reliable gauges of teacher performance through an intensive look at 3,000 teachers in cities throughout the country. Ultimately, it will examine multiple approaches, including using sophisticated observation tools and teachers’ assessments of their own performance.


READ THE DOCUMENT: The full text of the Gates study, as annotated by Times Staff Writer Jason Felch

The results have been eagerly awaited by education officials and come amid a national effort to reinvent teacher evaluations, which for decades have been based on occasional, cursory observations by principals who give passing grades to the vast majority of teachers. “Value-added” is thought to bring objectivity to the process and, because it compares students to themselves over time, largely controls for influences outside teachers’ control, such as poverty and parental involvement.

According to the study, student gains on standardized tests reflected meaningful learning and critical thinking skills, not just test preparation or memorization, a frequent concern of critics of the value-added approach.

But some experts and teachers union officials believe the findings are premature.

“The hope of [the project] was to figure out the kind of instructional practices that would help improve student achievement over time, but this preliminary report does not do that,” said American Federation of Teachers President Randi Weingarten, who has collaborated closely with the researchers. “We’re disappointed that it was rushed out when even the authors admit it is incomplete.”

The preliminary report focuses on two measures of teacher performance: value-added analysis and student surveys.

Both tend to identify the same teachers as either effective or ineffective, and the findings held up when teachers taught different classes of students, the study found.


The study found that feedback from students as young as fourth graders, especially about a teacher’s ability to manage a classroom and challenge students, was useful in evaluating teachers.

Because value-added measures were so reliable at predicting teachers’ future performance, the researchers urged school districts to use them as a “benchmark” for studying the effect of other measures.

“The evidence on student achievement gains is like a giant divining rod,” said Thomas Kane, a professor of education at Harvard University and director of the research project, in an interview. “It says, dig here if you want to learn what great teaching looks like.”

A growing body of evidence suggests that there are dramatic differences in teacher effectiveness not reflected in the subjective evaluations now in place.

The Times began publishing articles in August using value-added analysis to estimate the effectiveness of thousands of district teachers in raising test scores. Drawing on data the district had largely ignored, The Times found sometimes huge variation among teachers with similar students. The reports fueled an intensive debate nationally over how teachers should be evaluated and whether the results should be made public.

Teachers unions and some education experts have argued that value-added is an unreliable measure that encourages rote learning and “teaching to the test.”


The study found that test preparation was indeed positively correlated with teachers’ value-added scores, though not as strongly as other indicators, such as effective classroom management or efficient use of class time.

Also, value-added was found to be a reliable predictor of students’ future performance on different tests that measure higher-level concepts, particularly in math. “Teachers who are producing gains on the state tests are generally also promoting deeper conceptual understanding among their students,” the researchers found.

The researchers, including experts from Stanford University, Dartmouth College, Rand Corp. and the Educational Testing Service, acknowledged limits to the value-added approach: Scores are not available for all teachers, estimates are often volatile from one year to the next and the results don’t provide teachers with feedback on how to improve. Value-added should be used with other performance measures, the authors said.

Jesse Rothstein, a professor of economics at UC Berkeley who has been critical of the value-added approach, said the preliminary results didn’t answer some of his key concerns, such as how results are affected by the way students are assigned to teachers.

“The good stuff isn’t done yet,” he said.

In the study’s second year, teachers will be randomly assigned to new classrooms, an effort to eliminate potential bias caused by how students are placed with teachers. Results from those analyses will be released next spring, with a final report expected in 2012.

jason.felch@latimes.com
