Software Woes May Stem From Planning Failures

Associated Press

New software at Hewlett-Packard Co. was supposed to get orders processed faster at the computer giant. Instead, a botched deployment cut into earnings in a big way in August and executives were fired.

Last month, a system that controls communications between airliners and controllers in Southern California shut down because some maintenance had not been performed. A backup also failed, triggering potential peril.

Other recent computer code foul-ups delayed financial aid to university students in Indiana and caused retailer Ross Stores Inc.’s profit to plummet 40% after a merchandise-tracking system failed.

Such problems are often blamed on bad software, but the cause is rarely bad programming. “In 90% of the cases, it’s because the implementer did a bad job, training was bad, the whole project was poorly done,” said Joshua Greenbaum, principal analyst at Enterprise Applications Consulting in Berkeley. “At which point, you have a real ‘garbage in, garbage out’ problem.”

As governments, businesses and other organizations become more reliant on technology, the consequences of software failures are rarely trivial. Entire businesses -- and even lives -- are at stake.

“The limit we’re hitting is the human limit, not the limit of software,” Greenbaum said. “Technology has gotten ahead of our organizational and command capabilities in many cases. It’s amazing when you go into companies and see the kinds of battles that go on.”

Often, the first step leading to a failure is taken before the first line of computer code is drawn up. Organizations must map out exactly how they do business, refining procedures along the way. All this must be clearly explained to a project’s technical team.

“The risk associated with these projects is not around software but is around the actual business process redesign that takes place,” said Bill Wohl, a spokesman for software giant SAP. “These projects require very strong executive leadership, very talented consulting resources and a very focused effort if the project is to be successful.”

A 2002 study commissioned by the National Institute of Standards and Technology found software bugs cost the U.S. economy about $59.5 billion annually. The same study found that more than a third of that cost -- about $22.2 billion -- could be eliminated by improving testing.

A lack of strong leadership appears to have been a factor in HP’s problem, which led to the dismissal of three top executives in its server and storage business, hours after the company announced disappointing earnings Aug. 12.

HP did not return a telephone call seeking comment but has said that its problems have been resolved. Wohl said the software, made by SAP, was not at fault.

Big projects also can sour during development, particularly when insufficient resources are allocated, the people who will have a stake in the new system don’t participate in planning and executives don’t pay attention. All of that can lead to miscommunication between the business side and the software developers.

“Mistakes hurt, but misunderstandings kill,” said John Michelsen, chief executive of ITKO Inc., which makes software that helps companies manage big software projects and tests them automatically.

Michelsen said his Dallas-based company’s Lisa software is designed to reduce the complexity of testing, so that nontechnical executives in charge of major software projects can ensure the actual code adheres to their vision.

The lack of robust testing during and after such a project probably contributed to the Sept. 14 air traffic control radio system outage over parts of California, Nevada and Arizona. All 403 planes in the air during the incident managed to land safely, said FAA spokesman Donn Walker.

The problem originated in 2001, when Harris Corp. migrated the Federal Aviation Administration’s Voice Switching and Control System from Unix-based servers to Microsoft Corp.’s off-the-shelf Windows 2000 Advanced Server.

By most accounts, the move went well, except that the new system required regular maintenance to prevent data overload. When that maintenance wasn’t performed, the system shut itself down, as it was designed to do. But the backup also failed.

Michelsen blamed the failure on inadequate testing: “On a regular basis, the FAA should have been downing that primary system and watching that backup system come up. If it doesn’t go up and stay up, they would have known they had a problem to fix long before they needed to rely on it.”
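The drill Michelsen describes can be automated. The sketch below is purely illustrative and assumes nothing about the FAA's actual systems: the `Server` class and `failover_drill` function are hypothetical stand-ins that simulate deliberately stopping a primary system and checking that its backup comes up.

```python
class Server:
    """Hypothetical stand-in for a primary or backup system."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy  # an unhealthy server will fail to start
        self.running = False

    def start(self):
        self.running = self.healthy
        return self.running

    def stop(self):
        self.running = False


def failover_drill(primary, backup):
    """Down the primary on purpose and verify the backup takes over."""
    primary.stop()
    backup_came_up = backup.start()
    # Restore normal operation regardless of the drill's outcome.
    backup.stop()
    primary.start()
    return backup_came_up


primary = Server("primary")
print(failover_drill(primary, Server("backup", healthy=True)))   # backup took over
print(failover_drill(primary, Server("backup", healthy=False)))  # latent failure caught
```

Run regularly, a check like this surfaces a dead backup long before an outage forces reliance on it, which is exactly the gap Michelsen points to.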
