The recent spate of errors surrounding the release of student scores on statewide achievement tests is one of the most publicized examples of the serious deficiencies in California's educational data system. One hopes it will inspire corrective action by the governor and Legislature, because we will never be able to judge the quality of reform without better data.
Last week, Stanford 9 test scores were misreported for many students whose fluency in English had improved, leading to wrong conclusions about the progress made by English-language learners.
But the recently published evaluation of California's multibillion-dollar class size reduction program highlights even more serious problems. The purpose of this ongoing study is to see whether class size reduction affects the distribution and qualifications of teachers and, ultimately, whether it increases student achievement. Both inquiries are hampered by gaps in the state's data collection.
So far, the evaluation of the impact of class size reduction has found dramatic declines in the qualifications of the teacher corps and only small gains in student achievement. But what is the relationship between these findings? We know that overcrowded schools in disadvantaged neighborhoods had the toughest time hiring fully credentialed teachers for the additional classrooms spawned by class size reduction. Did students in these schools score higher or lower depending on their teachers' qualifications? Which teachers were most effective? These questions remain unanswered because the state's data collection system provides no way to link teacher records with student records.
Moreover, we need to know the association between student achievement and education policies in general, not just in relation to class size reduction. Because the state's data files do not contain a way to identify students year to year, analysts have no means of determining annual fluctuations in achievement for a given student or group of students. Instead, they must compare scores for a group of students in one year with scores of a different group of students the following year. Indirect comparisons of this sort do not tell the full story. The information is likely to be least accurate for disadvantaged schools, where turnover rates are high and the need to understand the impact of new policies is greatest.
The consequences of an underdeveloped data system are becoming more worrisome as California moves toward a new, high-stakes school accountability system. The legislation requires that the Department of Education develop an academic performance index for rewarding and intervening in schools. The index is based on Stanford 9 achievement scores, pupil and teacher attendance rates, graduation rates, scores on two tests that have not yet been developed (an applied academic skills test and a high school exit exam) and possibly other performance indicators as well. This sounds sophisticated, but when the system comes online in 1999-2000, it will be anything but. Why? Because the Stanford 9 scores are likely to be the only indicator available. Furthermore, Stanford 9 data on individual students cannot be linked from one year to the next, meaning that we will be assessing schools on the basis of changes in group scores while ignoring changes in the groups themselves.
There are historical explanations for limiting the kind of data schools can gather. Some have to do with resources: policymakers have been reluctant to spend money on something as dry as a data system. The lack of student identifiers also derives from a concern about protecting privacy. And the absence of links between teachers and students was prompted by a desire to protect teachers from accountability pressures they consider unfair.
But the combined result of these decisions is an inability to answer critical questions about the effects of California's reforms. Better data would increase our ability to understand whether the state is on the right education course.
Tentative attempts to develop better student information are underway, but they are moving at a glacial pace, and they rely on voluntary participation. We must do better than that. Other states, such as Texas and Florida, have invested in improved information systems. Our state leadership should do the same.
Considering that we spend $38 billion a year on schools, we must invest a fraction of that amount to gather the information we need to judge progress and make improvements. Let's design a statewide K-12 data structure that supports rather than frustrates education reform.