How reliable is computer software? As most computer users know, it is very reliable, but as many don’t know, it is never perfectly reliable.
This, it turns out, is no small matter in an age in which lots of things--and lots of potentially dangerous things--are run by computers.
Can we trust them to make life-and-death decisions, such as controlling radiation-therapy machines for cancer patients? (Those machines have malfunctioned, subjecting patients to lethal doses of radiation.)
Software is the set of instructions that tells a computer how to do the task you want done--word processing or accounting or playing games or shooting down hostile missiles. For complicated tasks--air-traffic control, say, or guiding an antiballistic missile system--there can be millions of lines of instructions.
This complicated software is flawed, somewhere. It has bugs. Occasionally a bug that has managed to escape detection reaches out and bites, as when a programming error brought down half of the nation’s telephone network for nine hours in January 1990.
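The 1990 outage was widely reported to stem from a control-flow slip of a kind any C programmer can make: a `break` placed inside an `if` that sits inside a `switch` exits the whole `switch`, silently skipping code the programmer expected to run. The sketch below is a hypothetical simplification, not the actual switching-system code; the function and variable names are invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { MSG_BUSY, MSG_OK } msg_t;

static bool cleanup_ran;  /* records whether the "always-run" step executed */

static void handle(msg_t msg, bool ring_pending)
{
    cleanup_ran = false;
    switch (msg) {
    case MSG_BUSY:
        if (ring_pending) {
            /* Intended to end only the if-clause; in C, break here
               exits the enclosing switch, so the cleanup below is
               skipped entirely for this input. */
            break;
        }
        /* ... normal message processing ... */
        cleanup_ran = true;  /* step the programmer meant to always reach */
        break;
    default:
        cleanup_ran = true;
        break;
    }
}
```

For most inputs the function behaves as intended; only the rare combination (a busy message with a ring pending) takes the flawed path--exactly the sort of latent bug that escapes ordinary testing and surfaces years later in service.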
Ivars Peterson, who covers computers, mathematics and physics for Science News, has put together a journalistic account of the issues, problems and people in software reliability, a hot topic that is full of all three.
Peterson notes that the issue first sprang to public consciousness in the mid-1980s in reaction to the “Star Wars” Strategic Defense Initiative that the Reagan Administration was then pursuing.
Many computer scientists argued that SDI was impossible: first, because the software to run the system would have to be the longest and most complicated software ever written; second, because it could never be tested under actual conditions (that is, a nuclear attack); and third, because it would have to work perfectly the first (and probably the last) time it was ever used, and no software in history has ever done that.
One of the chapters in Peterson’s book is devoted to that controversy and to David L. Parnas, who resigned from an important Pentagon SDI panel over just this issue.
“No experienced software developer expects a product to work well the first time that it is used,” Parnas said.
His resignation was noteworthy because Parnas had never been a critic of military research. In his resignation letter, he wrote: “Unlike many other critics of the SDI effort, I have not, in the past, objected to defense efforts or defense-sponsored research. I have been deeply involved in such research and have consulted extensively on defense projects. . . . My conclusions are based on characteristics peculiar to this particular effort, not objections to weapons development in general.”
In this way, Peterson effectively personalizes many of his stories, giving them a color and texture that mere recitation of facts and arguments doesn’t always provide.
This book does for software what Tracy Kidder’s “The Soul of a New Machine” did for hardware. It gives vivid accounts of the design, building, programming and testing of automated systems, and it recounts the details of what can and did happen when things went wrong.
Testing itself, as Peterson describes, is a major issue for the software industry. Since testing can never be perfect, how much testing is enough? How many more “unlikely” bugs are out there, lurking in software, waiting to surprise us and, perhaps, harm us one day?
This is the “Star Wars” question put globally, and it concerns Peterson greatly, especially when coupled with the ubiquitous problem of poorly documented software (programmers don’t usually write adequate explanations of what they are doing). Peterson writes:
“It’s not too difficult to imagine a society in the not-too-distant future fatally enmeshed in a tragically flawed technological creation that no one really understands. We would face mysterious, all-powerful entities whose whims dictate our daily lives in ways that we cannot fathom.”
That seems a bit overstated, but in general, this is a clear book about an important, interesting and altogether unprecedented human activity that stretches the limits of our minds.