
CAP Scores Can’t Tell You How One Student Did: Experts say parents often misinterpret the data to conclude that their child is doing well. It isn’t necessarily so, they say.

TIMES STAFF WRITER

As the well-worn cliché has it, you can’t tell a book by its cover. Nor can you tell a student by his or her school’s California Assessment Program scores.

Each year, when newspapers around the state publish results of the various CAP tests, parents pore over the figures, seeking their child’s school and comparing it to the school down the block. Naturally, parents feel proud if their child’s school ranks among the top--many parents, educators say, believe that if they send their kids to a high-scoring school, it automatically follows that they’ll get a top-flight education.

But that’s not necessarily so.

Educators around Orange County said that every year about this time, uncounted thousands of parents misinterpret the CAP scores--in effect, they draw conclusions that aren’t necessarily supported by the facts.


Under the so-called “matrix sample” on which CAP scores are based, the educators point out, each student answers only a portion of the test items. As a result, the scores are designed merely to paint a broad picture of a district’s overall performance, not the aptitude of individual students. Still, many parents persist in the belief that good scores mean their children are doing well.

“It’s like looking at a pie,” said William Eller, assistant superintendent for instructional operations at the Capistrano Unified School District. “Under the matrix sample, each student brings an ingredient to the pie, (but) you just look at the pie and say, ‘This looks good.’ I can’t look into the pie to find out if what the student put into it was good or bad.”

Another common pitfall in interpreting CAP scores is comparing dissimilar districts head to head. Students in affluent districts, for example, often have better books and other educational tools, and better access to libraries or other quiet places to study. Test scores tend to rise as a result of such advantages.

The CAP test takes those socioeconomic factors into account by assigning each school a rating that reflects the educational and occupational level of students’ parents. For the third- and sixth-grade tests, the rating is based on parents’ occupational level: a 1 denotes unskilled workers, a 2 semiskilled workers and a 3 skilled workers.

Parents of students in grades eight through 12 are rated on a five-point scale that denotes educational background. A 1 indicates parents who did not finish high school, and a 5 is assigned to parents with advanced degrees.

Although the tests are standardized and take socioeconomic factors and language proficiency into account, other factors, such as classroom conditions or how the test is administered, are not reflected in the ratings. Those factors can also influence test scores.


“There’s no way you can standardize to the point where every district is administering the test exactly the same way under the same conditions,” said Lois Blackmore, coordinator of data analysis for the Garden Grove Unified School District.

Thus, parents and district administrators would be better served by using CAP data to track performance within the district, rather than as a means of comparing one district to another, Blackmore said.

“I believe it is probably most pertinent for individual districts to look at (CAP scores) from year to year, because only the people working there know the conditions (students) are working under,” Blackmore said. “Each district knows the percentage of students who may be acquiring the English language. . . . Demographic factors (are) taken into consideration as data are aggregated and analyzed, but many times those considerations don’t get published.”

State Supt. of Public Instruction Bill Honig agreed that the way CAP results are published can render them meaningless, and he said the press often fails to analyze the data properly.

“How (the results) are reported is really crucial,” Honig said. “If you just list the scores, a lot of times it doesn’t show much. You may have a wealthy district (with high scores) that doesn’t educate as well as other wealthy districts.”

Honig also argued that newspapers should publish results from three or more years to show trends, rather than just one year’s scores, in which “there are always normal statistical fluctuations.”
