The patient was listed as unstable, in critical condition. On July 30, 2007, between 8 and 9 a.m., he was receiving insulin, anesthetic and blood pressure medication through a volumetric infusion pump when all three of the pump's channels failed.
His blood pressure dropped. Pressure on his brain rose. The pump was replaced, but not in time. The patient was brain dead.
The problem, according to a stark "adverse event report," was software, specifically an overflow in the memory buffer. New software installed by the device's manufacturer the year before "resulted in multiple adverse events over a short period of time," according to the report, which does not name the patient or the treatment facility.
In an industry where software governs everything from digital thermometers to insulin pumps and implanted pacemakers, the risks can be high and crashes can mean injury or death.
Between 1983 and 1997, 1 in 4 medical devices used software. Today, the figure is more than half. Software problems are now the third-leading cause of medical device recalls.
The case of the failed infusion pumps illustrates the dangers of faulty software in the medical device industry, one marked by rapid changes in technology.
The 2007 death was one of more than 700 linked to infusion pumps between 2005 and 2009, a period in which more than 10,000 complaints about the devices were received annually. The incident raised awareness of software vulnerabilities in medical devices and prompted the Food and Drug Administration to recall about 200,000 pumps from one manufacturer and to tighten regulation of the products.
But while other safety-critical industries, such as aviation, employ rigorous and continuous software testing, the medical device industry has been criticized for not adopting standard practices for developing and screening software before products go to market.
In fact, many of the software-based devices — including some insulin pumps, infusion pumps that deliver medicine or food to patients intravenously, and defibrillators used in cardiac arrest — are cleared by the FDA through an accelerated process that involves little or no clinical testing.
The Institute of Medicine recently critiqued the process, known as 510(k), saying it does not sufficiently ensure safety and effectiveness of devices before they are put on the market.
It also recommended that the agency develop procedures to ensure the safety and effectiveness of device software in particular, which requires a different kind of evaluation than hardware does, one that critics say the FDA has not sufficiently evolved.
"The frameworks for evaluating these devices haven't kept up with the imagination of what we have available today," said Kevin Fu, a professor of computer science at the University of Massachusetts in Amherst. "Imagine if we were evaluating car safety with the concept of a horse and buggy in mind. It wouldn't make sense."
Fu added: "Regulators were caught off guard as to how significant software would be, and it's led to all sorts of problems. It's led to unhappiness by all stakeholders and it's led to adverse events for patients."
Even small errors can have big effects, said Shari Lawrence Pfleeger, director of research at Dartmouth College's Institute for Information Infrastructure Protection. An artificial hip joint with a microscopic difference in size from the standard probably will not pose a threat. But just as a phone number that's off by one digit reaches a completely different person, Pfleeger said, an off-by-one error in code can make a device shut off or function improperly at a critical time.
The FDA says that even within the accelerated process, it requires manufacturers to demonstrate that the products are safe and effective before marketing them and that they follow good manufacturing practices.
Over the years, these requirements have substantially increased the average length of time to clear 510(k) submissions. "As the technology changes, the amount of information we looked at and the way we build our review changes all the time," said John Murray, a software compliance expert in the FDA's medical devices arm.
The regulatory system is also tailored to give more scrutiny to higher-risk devices — the software in a thermometer, for example, does not deserve the same review as that in a pacemaker, the agency says.
But Fu, Pfleeger and others believe implementing testing standards and engineering practices prevalent in other software-reliant industries could diminish some of the risks.
"To say some of these things are standard practice in software engineering is not true," Pfleeger said. "You don't need software engineering training to write software for some of these devices … and it's much easier to write the code for a device than to test it."
Officials acknowledge that the Institute of Medicine report raises some important issues about software. "We want to do the right thing. ... We understand that technology is growing," said FDA policy adviser Bakul Patel, adding that the agency awaits feedback from its public meeting on the report Sept. 15.