
Indifference Opened Door to Computer Virus

Times Science Writer

Last week’s disruption of two interlinked government-operated computer networks by a computer virus could have been minimized or perhaps avoided if systems managers had simply implemented earlier instructions for fixing known defects in their systems, according to experts in government and academia.

Graduate student Robert T. Morris Jr. exploited known defects in the Unix operating system, which controls individual computers on the networks, to insert a virus that shut the networks down for three days.

Instructions for fixing the defects had previously been circulated, but systems managers, many of them graduate students themselves, had not taken the time to implement the fix. It was not until Tuesday--nearly a week after the virus was inserted--that the final traces of the virus were removed from one computer system.


Fixes Went Unmade

Similarly, in the summer of 1987, hackers from the Chaos Computer Club in West Germany broke into the National Aeronautics and Space Administration’s worldwide computer network by exploiting a Unix defect in computers in the network. Digital Equipment Corp. had sent Unix users directions on how to fix the defect four months earlier, but the fixes were not made.

The episodes illustrate the indifference with which most computer users treat the issue of security, a problem that experts say practically guarantees future incidents.

“The normal reaction is to patch the holes used by the (virus) and then to stick your head in the sand and go back to thinking everything is OK,” said computer scientist Peter G. Neumann of SRI International in Menlo Park, Calif.

“I wonder if a year from now, facilities other than the particular ones that got hit will have looked at their security any more critically,” said Douglas McIlroy of AT&T Bell Laboratories in Murray Hill, N.J. “Frankly, I doubt it.”

Experts also note that hackers like Morris represent less of a risk to many computer systems, particularly those operated by banks and other businesses, than do authorized users.

Insider Manipulation

Last year, for example, Burkhard Junger, an employee of the giant auto firm Volkswagen AG, admitted manipulating currency exchange contracts on a company computer system to embezzle $270 million. More recently, Gayle Schreir, an employee of American Airlines, was indicted for allegedly using the airline’s computer to give herself and her father credit for 50 million miles in American’s frequent flyer program.


All of these incidents could have been minimized or avoided if systems managers had implemented appropriate security controls and made sure known defects were corrected, experts emphasized.

“Security problems are a management issue rather than a technology issue,” said Andy Russell, a spokesman for IBM Corp. in Armonk, N.Y.

Nonetheless, most experts agree that no computer system is completely safe from intrusion. Morris, for example, exploited one defect that went undiscovered by systems managers until after the virus was inserted. Though he apparently used the defect to enter some computers, he would not have been able to enter the majority of computers had the known defects been fixed, according to experts.

Pointing Up Flaws

In fact, according to accounts by his friends, Morris’ purpose was to point out the vulnerability of the system because the known defects had not been repaired. For that reason, experts suggest, he probably would not have tried to produce the virus in the first place had those well-known defects been fixed.

“None of the security systems are intended to defend against the serious attacker,” said University of Cincinnati computer scientist Fred Cohen, who is credited with describing the first virus in 1983.

Although this particular attack could have been largely thwarted, “anybody who wants to get by (many) defenses can do so trivially,” he said.


A computer virus, like its biological counterpart, enters a system stealthily and seizes control of its operation. Its first order of business is to make multiple copies of itself with which to infect other computers. At some later time, the virus may then simply flash a message of some sort to the computer user or, if the hacker is more malevolent, wipe out stored data and destroy programming.

The virus developed by Morris, a 23-year-old first-year graduate student at Cornell University in Ithaca, N.Y., was not designed maliciously, Morris has told his friends. He meant it to simply invade computers and lie dormant. But because of a simple mistake on his part, the virus multiplied so rapidly that it occupied all the computing power of machines it infected, causing their normal operations to slow and ultimately stop.

Assault on PCs

Most previous virus incidents have involved personal computers, which usually have no security at all. The viruses typically attach themselves to all of a user’s programs, then infect other systems when programs are exchanged with friends by means of floppy disks or computer bulletin boards.

But Morris operated on a grander scale. He injected the virus into an information network called INTERNET that connects more than 60,000 computers around the country. One segment of it, called ARPANET, is used by universities and defense contractors for sending messages and exchanging nonclassified data. A second segment, called MILNET, serves the same purpose for military installations.

The virus disabled from a few hundred to as many as 6,000 computers, but no data was lost because of it and no serious damage was done.

“You might consider it a philosophical warning shot across the bow,” said computer scientist Michael Muuss of the Army Ballistics Research Laboratory in Aberdeen, Md.


Morris told friends that he wanted to alert people to defects in the Unix operating system. Unix is a series of programs that tells the computer, among other things, how to handle data, where to store it, how to display it on the screen and how to send it to a printer.

The defects, however, were already widely known; systems managers had simply failed, for a variety of reasons, to correct them.

‘I Was Lazy’

“I was lazy, stupid, foolish,” said Cliff Stoll, who did not make the fixes on computers at Harvard University, where he is head of computer systems.

Stoll should have known better because he had played a major role in cracking the 1987 Chaos intrusion.

“I trusted other people on the networks,” he said.

Where the fixes had been made, the infection was substantially mitigated. Only one of 250 computers at the Ballistics Research Laboratory was infected, for example.

But “the job of installing the fixes is a big one,” said AT&T’s McIlroy. “A lot of people are happy as long as the system is working: (They say) ‘I’m all right. I’ll ignore it.’ There are limited resources in the responsible agencies and far more demand for new (capabilities) than for fixing bugs whose appearance is hypothetical.”


McIlroy also noted that ARPANET, which suffered the most disruption, “was started as an experimental system 20 years ago, and it is still run in largely the same way by graduate students scattered in many locations. It runs about as well as you would expect under those circumstances.”

Muuss also noted that ARPANET is highly publicized because its creators want as many people as possible to have access to it. Researchers do not expect the system’s openness to change significantly, because it is designed for maximum accessibility by users.

If managers are to avoid future virus insertions, they will have to be more diligent about making fixes immediately after defects are discovered, experts agree.

Many business system operators, in contrast, believe that their computer systems are insulated from harm because they are not as readily accessible and are little known--a concept somewhat derisively called “security through obscurity.”

Still Vulnerable

Because of this false sense of security, many businesses have not installed adequate security systems on their computers, and even companies that have been prudent, such as telephone companies, remain vulnerable.

“We’ve seen people break into the telephone systems, and they could do it again,” Neumann said.


The black boxes widely used in the past to circumvent computerized billing systems are minor compared to what could be done, he said. The telephone system could be severely disrupted or shut down completely, he added.

Despite the failure of businesses to install security systems, most experts agreed, hackers have apparently ignored business systems--although it is possible that companies have hidden such incursions to avoid negative publicity.

“Would you deal with a bank that had been penetrated by a hacker?” asked one expert.

“The majority of business data security problems . . . involve errors, omissions or outright fraud by users with authorized access,” IBM’s Russell said.

IBM, for instance, has very tight security on its computer systems. But last Christmas, it was infiltrated by a virus that left Christmas greetings at most terminals. The source was ultimately identified as an authorized user tied into the IBM system through the European Academic Research Network.

Experts cite a litany of management problems that render systems susceptible to internal or external hackers. Some are as simple as workers leaving terminals activated while they are away from their desks for lunch or other errands. Many others involve passwords, the secret words that identify a user.

Many users have simply taped their code word to their terminals or desks, although that practice is declining. Others choose code words that are too obvious. One expert cited studies indicating that a hacker could learn as many as three-quarters of a system’s passwords if he knew the names of the users’ dogs and wives and the streets they lived on.


Business systems managers are equally guilty, Russell said. As many as half of all systems managers use passwords supplied with the computers when they were purchased, he said. Such passwords are widely known.

“You can have the most expensive technology available, but it doesn’t do any good if managers aren’t aware of the problems,” he said.

Experts disagree about the security of military computer systems that, unlike ARPANET and MILNET, handle classified information. The government operates three such systems, each of them independent. Programming may not enter the systems electronically or on magnetic tapes or disks. It must be entered manually through a keyboard so that no viruses can tag along. The computers are also not connected to public telephone systems. Military experts say they are impenetrable.

“But the military has to be connected worldwide, which makes it susceptible to penetration,” McIlroy said. “People should remember ‘War Games,’ ” a movie in which a brilliant teen-ager nearly started World War III by tapping into a military computer.

Chance Is Always There

But for most users, who do not want to be as isolated as the military, future incidents seem unavoidable.

“The bottom line is (penetrations of networks) surely will happen again,” said Harvard’s Stoll. “You can’t plug all the holes. If you lock all the doors and windows, some SOB climbs down the chimney.”


“If you want to talk to the rest of the world,” added McIlroy, “there’s always the chance that you’ve done something slightly wrong. The chance is there.”
