
Separate study confirms many Los Angeles Times findings on teacher effectiveness

Students rush between class periods at Fairfax High School. (Don Bartletti / Los Angeles Times)

A study to be released Monday confirms the broad conclusions of a Times analysis of teacher effectiveness in the Los Angeles Unified School District while raising concerns about the precision of the ratings.


FOR THE RECORD:
Teacher effectiveness: An article in the Feb. 7 LATExtra section about a University of Colorado study of L.A. Unified School District teacher effectiveness said researchers found that up to 9% of math teachers and 12% of English teachers ended up in different categories than those in a separate analysis by The Times. These percentages referred only to teachers whom The Times rated as more effective than average but whom the Colorado researchers found to be indistinguishable from average. The story should have included an additional 7% of math teachers and 10% of English teachers whom The Times rated as less effective than average but, according to the Colorado study, were indistinguishable from average.


Two education researchers at the University of Colorado at Boulder obtained the same seven years of data that The Times used in its analysis of teacher effectiveness, the basis for a series of stories and a database released in August giving rankings of about 6,000 elementary teachers, identified by name. The Times classified teachers into five equal groups, ranging from “least effective” to “most effective.”

After re-analyzing the data using a somewhat different method, the Colorado researchers reached a similar general conclusion: Elementary school teachers vary widely in their ability to raise student scores on standardized tests, and that variation can be reliably estimated.

But they also said they found evidence of imprecision in the Times analysis that could lead to the misclassification of some teachers, especially among those whose performance was about average for the district.


The authors largely confirmed The Times’ findings for the teachers classified as most and least effective. But the authors also said that slightly more than half of all English teachers they examined could not be reliably distinguished from average. The general approach used by The Times and the Colorado researchers, known as “value added,” yields estimates, not precise measures.

The Colorado analysis was based on a somewhat different pool of students and teachers from the Times analysis, a difference that might have affected some of the conclusions. The Colorado researchers began with the same dataset released to The Times, but their ultimate analysis was based on 93,000 fewer student results and 600 fewer teachers than the analysis conducted for The Times by economist Richard Buddin.

In addition, to improve the reliability of the results it reported, The Times excluded from its ratings teachers who taught 60 students or fewer over the study period. The Colorado study excluded only those teachers who taught 30 students or fewer.


After a Times reporter inquired about that difference, Derek Briggs, the lead researcher on the Colorado study, said in an e-mail that he had recalculated his figures using only those teachers who had taught more than 60 students. Doing so reduced the number of discrepancies, he said, but up to 9% of math teachers and 12% of English teachers still might have ended up in different categories under Colorado's method than they did in The Times' analysis.

The authors also found that the way school administrators assign students to teachers — giving especially challenging students to a certain teacher and not to others, for example — could have skewed the value-added results. But recent research by a Harvard professor using Los Angeles school data did not find that such assignments created a bias in value-added scores.

Buddin said that although most conclusions of the two studies were similar, the differences in data analyzed made it difficult to directly compare his results with those of the Colorado study.


The Colorado study comes as education officials in Los Angeles and across the country are moving to incorporate more objective measures of performance into teacher evaluations. In the process, they are confronting the technical challenges involved in value-added analysis, which attempts to estimate a teacher’s effect on student learning by measuring each student’s year-to-year progress.

Developing value-added scores requires numerous judgment calls about which variables to use and how to obtain the most reliable results. Each school district that uses value-added analysis follows slightly different methods, and supporters of the approach say it should not be used as the sole measure of a teacher's ability.
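To make the general idea concrete, the following is a minimal sketch in Python using made-up numbers and a deliberately simplified model. It is not the method used by Buddin or the Colorado researchers, whose models involved additional variables and statistical controls; it only illustrates the basic logic of predicting each student's score from the prior year and crediting teachers for the average amount their students beat or fall short of the prediction.

```python
# Hypothetical, simplified value-added sketch (illustrative only).
# Idea: predict this year's score from last year's score, then average each
# teacher's students' prediction errors (residuals) as the teacher's estimate.

import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic data: 50 teachers, 40 students each (purely made up) ---------
n_teachers = 50
students_per_teacher = 40
teacher_ids = np.repeat(np.arange(n_teachers), students_per_teacher)
true_teacher_effect = rng.normal(0.0, 0.2, size=n_teachers)

prior_score = rng.normal(0.0, 1.0, size=teacher_ids.size)
current_score = (0.7 * prior_score
                 + true_teacher_effect[teacher_ids]
                 + rng.normal(0.0, 0.5, size=teacher_ids.size))

# --- Step 1: fit a simple prediction of this year's score from last year's --
X = np.column_stack([np.ones_like(prior_score), prior_score])
coef, *_ = np.linalg.lstsq(X, current_score, rcond=None)
residual = current_score - X @ coef   # how much each student beat the prediction

# --- Step 2: a teacher's value-added estimate is the mean residual of their students
value_added = np.array([residual[teacher_ids == t].mean()
                        for t in range(n_teachers)])

# --- Step 3: sort teachers into five equal groups, as The Times' rankings did
quintile = np.digitize(value_added,
                       np.quantile(value_added, [0.2, 0.4, 0.6, 0.8]))

print("value-added estimates, first 5 teachers:", value_added[:5].round(3))
print("quintile (0 = least effective, 4 = most effective):", quintile[:5])
```

Even in this toy version, the judgment calls the researchers describe are visible: how many prior years of scores to use, which student characteristics to control for, and how many students a teacher must have before an estimate is reported all change the results at the margins.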

Briggs said his goal was to raise awareness about those issues. “You have an obligation to have an open conversation about the strengths and weaknesses” of the methodology, he said.

Briggs’ study was partly funded by the Great Lakes Center for Education Research and Practice, which is run by the heads of several Midwestern teachers unions and supported by the National Education Assn., the largest teachers union in the country.

jason.felch@latimes.com

Times staff writer Jason Song contributed to this report.
