
Diagnosing Biomedical Publications

Times Medical Writer

They can change the course of medicine, influence the careers of researchers, and determine the fortunes of pharmaceutical manufacturers and their stockholders.

They are the powerful biomedical journals, such as the New England Journal of Medicine and Nature, whose names are often preceded by the word prestigious in the news media.

But how scientific is the process by which biomedical journal editors decide what research articles to publish?

Journal editors conceded at a three-day meeting earlier this month that their editorial procedures are imperfect, frequently fallible, and have received little serious scrutiny.


The procedures, which involve the use of independent researchers to judge the merit of studies by other researchers that have been submitted for publication, are collectively known as the “peer review” system.

Speaker after speaker at the American Medical Assn.-sponsored “International Congress on Peer Review in Biomedical Publication” credited peer review with substantially improving the quality of scientific articles. But they cautioned that the system has many deficiencies.

Peer review is far from being a “perfect sausage machine for grinding out the truth,” said Elizabeth Knoll, of the University of California Press in Los Angeles. Knoll, who previously worked for the Journal of the American Medical Assn., added: “Just because peer review is about a review of scientific data doesn’t mean that it is itself a scientific process. . . . We need more realistic, human expectations.”


Dr. Arnold S. Relman, the New England Journal’s editor-in-chief, said the peer review process is “not meant to determine ultimate truth or falsity.” Relman said its main functions were “to improve the quality of what is published” and “to help editors decide what they want to publish in their journals, knowing that what they don’t choose to publish in their journal will be published in somebody else’s journal.”

The deficiencies cited by editors and other speakers included a bias toward publishing studies with positive results, a bias against publishing truly innovative papers, and an editorial process that is shrouded in excessive secrecy.

The “selective suppression of ‘negative results’ may lead to the adoption of ineffective or hazardous treatments,” said Dr. Iain Chalmers of Oxford University in England.


To guard against this bias, Chalmers suggested that all clinical trials should be listed in central registries at inception and their results eventually published or otherwise made public. “Failure to provide a full report of adequately designed research is a form of scientific misconduct which does an injustice to all those who have participated in the research,” he said.

Editors also criticized themselves for failing to catch many deficiencies in manuscripts that should have been corrected, and maintained they were unable to detect many instances of scientific misconduct, ranging from duplicate publication to shoddy research practices and outright fabrication and fraud.

Dr. Joseph M. Garfunkel, editor of the Journal of Pediatrics, described a study of 25 randomly selected manuscripts that had already been accepted for publication. After receiving two additional peer reviews, journal editors said they would have rejected up to 20% of the accepted papers and required additional revisions in almost all of them.

The leading biomedical journals--the New England Journal, the Journal of the American Medical Assn., Science, the Annals of Internal Medicine, the Lancet, Nature, and the British Medical Journal--each receive thousands of unsolicited manuscripts a year. They publish only 10% to 20% of them.

Journal editors say their use of peer review distinguishes their publications from newspapers and magazines, as well as professional journals that do not use outside reviewers.

But their editorial procedures vary widely. Some obtain one or two reviews from independent scientists; others obtain five to 10 or more.


Most journals send only some submitted reports out for review; many manuscripts are rejected outright. On the other hand, some publish articles that have not been scrutinized by peer review, ranging from original articles to letters and editorials.

The journals do not pay their authors, but the costs of preparing articles are usually subsidized by university salaries, research grants, or pharmaceutical manufacturers. The British Medical Journal and the Lancet pay their reviewers about $40 an article, although most journals do not make such payments.

The beginnings of peer review are often traced to 1752, when the Royal Society of London took over responsibility for publication of the Philosophical Transactions, a journal of that time, according to David A. Kronick, a medical librarian at the University of Texas Health Science Center at San Antonio.

Early peer review took many forms, Kronick said. Some European scientific societies offered premiums for “prize essays” that were judged by outside reviewers. Others based publication decisions on a majority vote of their members. Instead of publishing on a regular basis, some journals waited for enough quality papers to accumulate to justify a new issue.

Modern biomedical journals emerged in the 19th Century. Editorial peer reviewing was pioneered by the British Medical Journal and later the Lancet, according to John C. Burnham, a historian at Ohio State University. But it was not generally practiced in English-speaking countries until some time after World War II. The widespread adoption of peer review followed the explosion of medical knowledge, the development of medical specialties, and a resultant sharp increase in the number of papers submitted for publication.

At the conference, many speakers pointed out that medical journals are increasingly viewed as sources of credible, factual information. While editors often maintain that the decision to publish an article does not represent an endorsement of its conclusions, they “cannot avoid lending their prestige to the work they publish,” said Knoll, of the University of California Press.


As a result, the journal review process, like the review of research grants and other situations in which researchers judge the merit of each others’ work, has a special obligation to uphold high ethical and scientific standards, said Dr. Peter Budetti, a pediatrician who works for the health and environment subcommittee of the U.S. House Energy and Commerce Committee.

One recent criticism of medical journals is that they try to maintain excessive control over the dissemination of research findings by adhering in varying degrees to the so-called “Ingelfinger Rule.”

The rule, named after Dr. Franz Ingelfinger, a former editor of the New England Journal, ordinarily prohibits publication of research results that have already received extensive publicity.

These medical journal policies are not meant to impede researchers or government officials from publicizing new data with immediate public health or clinical significance, according to the New England Journal’s Relman and others. Nor are they meant to keep researchers from presenting their findings at scientific meetings or to interfere with press coverage of such meetings.

Still, some researchers, many of whom are supported by public funds, routinely cite the rule as a reason not to discuss their findings with reporters. Relman said these views were “based on ignorance and a flat misunderstanding of our stated policy.”

Dr. Stephen P. Lock said medical journal editors needed to respond to misunderstandings and unrealistic expectations of their editorial procedures. Lock is the editor of the British Medical Journal and the author of a 1985 book on peer review called “A Difficult Balance.”


“I believe editors need to go much more public than they have,” Lock said. He suggested that journals publish full explanations of their review procedures, including “what they mean by peer review” and “which parts of the journal are subject to this and which are not.”

Lock also said journals should publish an annual audit of their acceptance rates for articles on various subjects and of the time that elapses between the submission of articles and the decision to accept or reject them. Medical journals are frequently criticized for taking a year or more to reach publication decisions.

“If we don’t put our own house in order, then those chaps in Congress or the House of Commons are going to do it for us,” Lock said.
