New SAT: Write Long, Badly and Prosper
When the administrators of the SAT announced that their new test would include a 25-minute essay portion, writing teachers around the country were optimistic. We hoped it would be a genuine test of writing ability, and that over time it would increase the emphasis on good writing in high schools and lead to better-prepared, more-literate students being sent off to college.
Unfortunately, that no longer seems likely. Instead, the SAT essay has turned out to be a completely artificial exercise that appears to reward students for writing badly.
First, the test encourages wordiness. Longer essays consistently score higher. Shortly after the test was first administered in March, I looked at scored samples that were made public, including the set used to train graders. I discovered that I could guess an essay’s score just by looking at its length -- even from across a room. One verbose sample that received a perfect 6 concluded with the ridiculous sentence: “If secrecy were eradicated, many problems, such as internal division, but also possibly hate, might also be eliminated.”
Just as disconcerting is the test’s disregard for factual accuracy. The official guide for scorers states: “Writers may make errors in facts or information that do not affect the quality of their essays. For example, a writer may state ‘The American Revolution began in 1842’ or ‘Anna Karenina, a play by the French author Joseph Conrad, was a very upbeat literary work.’ ” One sample paper awarded a “perfect” 6 described the “firing of two shots at Fort Sumter in late 1862,” even though the attack took place in early 1861 and involved some 4,000 shots.
The truth is the whole idea behind the 25-minute essay is wrongheaded. Nowhere except on examinations such as the SAT essay does an individual have to write so quickly on an unfamiliar topic. Indeed, aside from in-class college exams, most college writing assignments involve planning, writing and rigorous revising. Moreover, in-class college exams -- like most papers produced in the workplace -- tend to focus on material the writer knows. Few people receive e-mails from the boss asking for a rapid response to a ludicrously broad question like, “What is your view on the idea that it takes failure to achieve success?” (one of the sample essay prompts).
The problem is especially acute for students from bilingual backgrounds, who particularly need time to revise their writing. Joseph Conrad (who was, in fact, a native Polish speaker, not a Frenchman) spent long hours editing drafts of his novels. Although he became a master of English prose, he would probably have received a lousy score on the SAT essay.
Unfortunately, many students enter college believing that the sloppy writing that got them there is the type of writing that colleges want. College teachers often spend the first year “deprogramming” students from writing formulaic “five-paragraph essays,” thinking that a first draft is a final draft, believing form is more important than content, and equating quantity with quality. The SAT essay will only encourage that kind of thinking.
The College Board itself admits that its essay portion is an unreliable measure of writing skills. That is why it counts for only 25% of the SAT’s writing section; the principal component, 75% of the writing score, still consists of multiple-choice items (which the College Board’s own research shows correlate highly with parental income).
How can the current situation be addressed?
First, we must acknowledge that high-stakes college admission testing is too important to leave entirely to the private testing agencies. Colleges and universities are ready, willing and more than able to take the lead. College and high school writing teachers, along with college admission officers, should control the design, content and grading of the writing test even if its administration is left to outsiders.
Second, the test should consist of two substantial essays written over the course of a day. The National Commission on Writing has stated that one writing sample is insufficient to measure a student’s writing and that students need time to plan, revise and edit.
Third, rather than having isolated individuals grade the tests over the Web, testing agencies could use the Internet to create regional grading centers where college and high school faculty would evaluate the papers cooperatively in weekend scoring sessions. This would be more reliable than solitary grading.
Among other benefits, this more comprehensive writing test would discourage coaching, because the only effective “coaching” would be to teach students how to write.