The Cutting Edge: COMPUTING / TECHNOLOGY / INNOVATION : Readers Say What Is, Isn’t Wrong With Peer Review

Even in science, a flawed system that is the “best process we’ve got” can turn out to be the only game in town.

Readers responding to my Oct. 25 column cite many problems with the peer review system that determines who gets published in professional journals and who gets funding for their work, but none recommends abolishing it. Federal funding agencies, including the National Science Foundation, are re-evaluating the system on the heels of a warning from the General Accounting Office that steps need to be taken to “ensure fairness.”

Critics contend that peer review results in an enormous waste of time and that it is subject to the biases of reviewers whose identities remain a secret.

But most readers said the problem is not with the system, but with the people involved in it.

“The problem boils down to a failure of the reviewers to review objectively,” writes aerospace engineer William E. Haynes. “That is a problem that no revision in method will cure. The problem is exactly the same as the cause of most of our problems in the world: a deterioration in ethics, morals and commitment to excellence.”

“The system is a very mixed bag,” writes astronomer Andrew T. Young of San Diego State University. “Some things it does very well: sorting out fairly routine projects that are sort of ‘bread and butter’ science, the things that obviously need to get done to fill in the blanks. But some things it does very poorly.

“There are projects that need to be much longer than the typical National Science Foundation grant,” Young continues. “Unless these can produce publishable results in a few years, they won’t get started in the present system. There are good ideas that simply aren’t obvious to anyone but their inventors, and these get voted down routinely.

“In the current climate of tight money, anything the least bit speculative doesn’t have a chance, and that means that anything really original has to be bootlegged on funds from some normal-looking project or it won’t get done.”

Young advocates a change in the system that would allow a small percentage of grants to go to off-the-wall proposals submitted by scientists with established track records.

James D. Barrie, research scientist with the Aerospace Corp., agrees that the system “has flaws,” but he rejects a proposal from Pennsylvania State University’s Rustum Roy to base funding and publication on the past performance of the scientist submitting the proposal rather than on peer review.

“Roy’s solution is no solution,” Barrie writes. “He merely codifies a ‘good old boy’ system in which if you are ‘in,’ then you can publish whatever you want, and to get ‘in,’ you must suck up to someone who is.”

Roy has launched his own professional journal that is open to scientists who have published at least 50 peer-reviewed articles in other journals. Young researchers can publish in the journal if they are sponsored by a senior scientist.

“The real problem is that people like Roy [and many funding agencies] place a high degree of stock in ‘bean counting,’ and there is tremendous pressure upon scientists to inflate their publication counts rather than to concentrate on producing papers of high quality,” Barrie argues.

“My feeling is that the overall tolerance of mediocrity needs to be reduced . . . so that only quality work is put into print. Perhaps what this says is that we need a better process of selecting and rating reviewers, so that only our best scientists take part in the process, and they are not put in positions in which a conflict of interest from a single reviewer is enough to delay or prohibit publication of a paper.”

Reflecting the comments of others, Barrie notes: “Just like democracy, the process has flaws, but it is the best process we’ve got.”

Some readers said reviewers had helped them in their work by identifying weaknesses in their papers or proposals. But most said they found the secret peer review process unsettling and wasteful, even if it is the best system available.

“One of the scariest conversations I ever had with a journal editor occurred 20 or so years ago,” writes Young. “My wife and I had been surveying journals to see how many referees they used, and what their policies were.

“I happened to find myself in a workshop with the editor of a very prestigious journal, so I asked him how many referees he used.

“ ‘One,’ was his reply. ‘We tried using two for a while, but we found they usually disagreed, so we went back to using one.’ ”

Lee Dye can be reached by e-mail at