Implanted cardiac device testing is scant, study finds

Millions of Americans receive implanted cardiovascular devices such as pacemakers and stents, but many of the devices are not subjected to rigorous safety and effectiveness research before being approved for use, according to a study released Tuesday.

It’s common for such devices to receive Food and Drug Administration approval based on information from only a single study, which “raises questions about the quality of data on which some cardiovascular device approvals are based,” said the authors, from UC San Francisco.

Such scant data do not constitute the kind of high-quality evidence for safety and efficacy that doctors and consumers expect, the authors said, although they stressed that their findings didn’t mean cardiovascular devices were universally unsafe. The study was published in today’s Journal of the American Medical Assn.

In recent years, the FDA has subjected drugs to tougher scrutiny before deciding whether to approve them. But far less attention has been given to medical devices, said Dr. Sanket Dhruva, a coauthor of the paper and a medical resident at UC San Francisco.

“In general, there is very little data on the strength of evidence for device approval, whereas there is a lot more on pharmaceuticals,” Dhruva said. “We didn’t expect that all the devices would need multiple studies or randomized studies. But we were surprised that so many devices were approved on the basis of a single study.”

The researchers reviewed the pre-market approval paperwork for 78 high-risk cardiovascular devices that received the FDA’s blessing from January 2000 through December 2007. These documents are intended to provide an objective critique of the scientific evidence for the devices. The FDA categorizes medical devices by risk, with Class I defined as low-risk (such as a hand-held surgical instrument) and Class III signaling the greatest risk because the devices are typically life-sustaining.

The study found that 65% of the pre-market approval applications for high-risk devices were supported by a single study. Some studies also failed to provide such details as the number of participants enrolled. Only 27% of the studies were randomized and only 14% were blinded. Randomized, controlled, blinded clinical trials are considered the gold standard for scientific research; in such studies, participants are assigned at random to receive a specific treatment, a comparable group serves as a comparison, and not even the researchers know which group is which.

Medical device studies, however, are much more difficult to carry out and cannot be compared with drug studies, said Mark Leahey, president and chief executive of the Medical Device Manufacturers Assn. For example, a drug under study can be compared with a sugar pill, but there is no such placebo with which to compare an implanted device.

“The authors are trying to draw a comparison to drugs and devices without appreciating the inherent differences and the limitations involved with designing trials,” Leahey said. “We’re not suggesting devices should not be put through a rigorous process. But it’s a false conclusion to suggest that the only way to get data to protect patients is through randomized, blinded trials.”

But other evidence on device quality was also lacking in the pre-market applications, the study found. In a third of the applications, no studies were conducted in the United States, even though the FDA asks for proof of applicability to U.S. patients. Researchers also found discrepancies in some studies, such as differences in the number of people enrolled and the number analyzed.

The FDA, which did not respond to a request for comment, began reviewing medical devices in 1976. In recent years, the number of devices has soared. In 2008, an estimated 350,000 people received pacemakers, 140,000 received implantable cardioverter defibrillators and 1.2 million received stents, the study said.

“I think most cardiologists and cardio-thoracic surgeons would be surprised to learn of the strength of evidence of the devices that they are using and would have expected more randomized and blinded studies with use of active controls, clear endpoints and longer follow-up,” the study’s lead author, Dr. Rita Redberg, said in an e-mail.

shari.roan@latimes.com
