More on the ‘value-added’ method

This is the second in a Times series examining the effectiveness of teachers and schools in the Los Angeles Unified School District using a statistical analysis of students’ California Standards Test scores in math and English. It covers the academic years 2002-03 through 2008-09.

What is “value-added” analysis?

“Value-added” looks at each student’s past test performance and uses it to project his or her future performance. The difference between the child’s actual and projected results is the estimated “value” that the teacher or school added (or subtracted) during the year.
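For readers who want to see the arithmetic, the idea can be sketched in a few lines of code: project each student's current score from past performance (here with a simple one-variable least-squares fit, a toy stand-in for the Times' far more detailed statistical model), then take the gap between actual and projected scores. All numbers below are hypothetical.

```python
# Toy illustration of the value-added idea. The actual analysis (Buddin's
# model) controls for many more factors; this sketch uses only one prior
# score and made-up numbers.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return a, b

# Hypothetical pairs: (last year's test score, this year's test score)
prior = [300, 320, 350, 400, 420, 450]
actual = [310, 315, 360, 405, 415, 460]

a, b = fit_line(prior, actual)

# "Value added" for each student = actual result minus projected result.
for p, y in zip(prior, actual):
    projected = a + b * p
    print(f"prior={p} actual={y} value_added={y - projected:+.1f}")
```

A positive gap suggests the student beat the projection; averaged over a teacher's students, the gaps form the teacher's estimated effect.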

How does a school’s value-added score differ from a teacher’s value-added score?

A teacher’s score is generated from the test results of his or her students year after year; a school’s score reflects the average of the results of all students during the period analyzed.

Is a school’s score affected by low-achieving students, English-language learners or other students with challenges?

By comparing each child’s results with his or her past performance, value-added largely controls for such differences, leveling the playing field between schools with more high-achieving students and those with more low-achieving ones.

How does value-added differ from the state’s Academic Performance Index, the prevailing gauge used to assess schools in California?

Value-added measures individual students’ year-to-year growth. The API compares the achievement level of one year’s students with those in the same grade the previous year. Experts say both are important and tell parents different things about a school.

Is the value-added method fair to schools with high-achieving students? Don’t their students run out of room to grow?

The method is generally fair to such schools. In fact, the analysis found that high-API schools were somewhat more likely to also see high growth in test scores.

Should parents select a school based solely on its value-added score?

Value-added measures only whether students improved on math and English standardized tests. Parents will probably want to consider a wide variety of other factors, including the API, course and extracurricular offerings and their own impressions of the teachers and campus.

What are the limitations of value-added?

Like the API, it is based on standardized tests, which many teachers and others consider a flawed and narrow measurement of learning. Also, the tests are not given to students in kindergarten or first grade. Scholars continue to debate the reliability of various statistical models used for estimating value-added. Even so, many educators and experts consider value-added an important tool for gauging the effectiveness of teachers and schools.

When can readers view the value-added scores of teachers and schools?

The Times will publish a database this month with the scores of 450 elementary schools and more than 6,000 teachers.

Who did the analysis for The Times?

The Times hired Richard Buddin, a senior economist and education researcher at Rand Corp., to work with its data analysis team. Rand was not involved in Buddin’s work. Several outside experts reviewed and critiqued it as well. A $15,000 grant from the Hechinger Report, an independent nonprofit education news organization at Teachers College, Columbia University, helped fund the work. The organization did not participate in the analysis.

Where can I learn more?

The Times has posted more information about the method and these articles at

You can see Buddin’s technical paper at: