Big Brother Finds Ally in Once-Wary High Tech

TIMES STAFF WRITERS

Travelers at the airport here, on their way to Palm Springs or Las Vegas, are encountering something new just beyond the security checkpoint. It looks like a tube of King Kong’s lipstick: six feet of brushed aluminum.

As the traveler approaches the device, a loudspeaker tells him to stop and “look forward.” A camera snaps a series of photographs, which are instantly compared with pictures of known terrorists in a computer database. If there’s a match, a siren sounds.

Terrorists in Fresno? As government officials have said, they could be anywhere. Which means they must be sought out everywhere. Airports around the country are preparing to install the facial-screening system being tested at the Fresno Yosemite International Airport.

The facial scans are part of a far-reaching shift in the nature and purpose of American high technology, a change hastened by the Sept. 11 attacks.

For two decades, high tech moved inexorably toward greater convenience and personal empowerment. Hand-held organizers, satellite phones and other digital devices embodied this ideal. They made it possible to get stock updates by cell phone and to shop for groceries via Palm Pilot. The trend was toward a decentralized system of ever-smaller, cheaper and more powerful gadgets.

These days, fear of terror is shifting the emphasis from wired convenience to physical security, from decentralized technology meant to make life easier to centralized surveillance meant to make America safer.

Across the tech world, money and creative energy are flowing to emerging technologies of vigilance, ranging from disposable surveillance cameras to systems that read brain waves for signs of malevolent intent.

Increasingly, the trend is to use technology to search and identify, to mark boundaries and deny access. At airports and office buildings, in supermarkets and stadiums, on computer networks and city streets, it will observe and control--for our own safety.

The elements of this new order go beyond software and servers. Anti-terrorist legislation enacted by Congress after Sept. 11 expands the FBI’s authority to eavesdrop and search e-mail and phone records. In California, Gov. Gray Davis is seeking similar powers.

The new technologies will make such searches faster and more extensive. They also will reduce the typical citizen’s zone of privacy and clash with deep-rooted American values.

“Security and privacy are always in a balance, but since the attacks the equation has changed,” said Jonathan Zittrain, a professor at Harvard Law School and an expert on information privacy.

As in other national crises, from the Great Depression to Pearl Harbor, political, legal and technological change is outpacing discussion of its consequences.

“You don’t want to have a committee meeting when your house is on fire,” Zittrain said.

In the forefront of this shift are two of the most successful and influential chief executives in Silicon Valley: Larry Ellison of Oracle Corp., the second-largest software company, and Scott McNealy of Sun Microsystems Inc., a maker of powerful computers that operate networks.

“Absolute anonymity breeds absolute irresponsibility,” McNealy declared in October. “If you get on a plane, I want to know who you are. If you rent a crop-duster, I want to know who you are.”

Both men are calling for a high-tech national identification card. Oracle and Sun could profit from the development of such an ID, which would be electronically linked to government databases. Still, the executives’ stance is remarkable coming from an industry that traditionally has viewed government oversight with disdain.

Their view also reflects changing public opinion. Polls show that Americans, who used to associate a national identity card with totalitarianism, now strongly favor the idea.

“We as a people are willing to trade a little less privacy for a little more security,” said Stewart Baker, former general counsel to the National Security Agency, the largest U.S. spy agency. “If using more intrusive technology is the only way to prevent horrible crimes, chances are that we’ll decide to use the technology, then adjust our sense of what is private and what is not.”

Scanning the Faces in the Crowd

The Sept. 11 attacks may have lent impetus to the creation of a national identity state, but the groundwork was laid long ago. Video surveillance is so common as to have faded into the fabric of everyday life.

More than 2 million cameras in this country continuously scan street corners and hotel lobbies, grocery stores and schools, subway trains and sports stadiums. Every day, they capture billions of images.

Those images are used mainly to identify criminals after the fact. The next generation of watching technologies is designed to help police intervene in real time.

An embryonic version of this idea is being tested by the Bay Area Rapid Transit commuter-rail system.

Like most public transit agencies, BART came under pressure to increase security after Sept. 11. Each of the 39 BART stations is getting at least 30 high-resolution color cameras that can tilt and pan in a complete circle. They can zoom in on a truck at the end of a large parking lot or the brand logo on a suspect’s shirt pocket. The images will be transmitted instantly via high-speed fiber-optic cable to BART police headquarters.

“Today, dispatchers get calls of crimes in progress, but send in the patrolman blind,” because they can’t see and describe the action themselves, said John Davenport, an undercover officer and one of the system’s designers.

With the new system, dispatchers can train a camera on a mugger in the act, direct officers to the scene with precision and offer detailed descriptions of the suspect.

Davenport said the system helped thwart 15 assaults, robberies and other crimes within a few weeks at the station near Oakland’s sports coliseum, the first to get the new cameras.

BART is spending $2 million on its setup, but the price of a basic surveillance system--a camera, wireless transmitter and connections to a TV screen--is less than $100. Soon, cameras will be cheap enough to post along miles of remote passages, like the BART tube under San Francisco Bay, or on the back of every seat on every commercial airliner.

“George Orwell underestimated our enthusiasm for surveillance,” said Baker, the former NSA counsel. “He correctly predicted that we’d have cameras everywhere. What he failed to imagine is that we’d want them so bad we’d pay for them.”

Civil Libertarians Fear ‘False Positives’

Pictures, unlike words, numbers or even fingerprints, long were considered too subtle and complex to search by computer. Human eyes, it seemed, were needed to tell a mugger from a mother, a terrorist from a tourist.

The facial-recognition system being tested at the Fresno airport represents an important step toward making such judgments instantly and reliably by computer. Airports in Boston, Dallas and Providence, R.I., have begun to install the technology, and officials at others say they plan to.

A facial-recognition system breaks down the human face into 26 points of bone structure. It then converts these points to numbers and compares them with head shots in a databank.
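The matching step the article describes can be sketched in a few lines. This is a hypothetical illustration, not Pelco's actual algorithm: it assumes each face is reduced to a fixed-length list of 26 numbers (standing in for the 26 bone-structure points) and that a match means the closest stored signature falls within a distance threshold.

```python
import math

def face_distance(a, b):
    """Euclidean distance between two 26-number face signatures."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_face(probe, database, threshold=0.5):
    """Return the name of the closest database entry within the
    threshold, or None if no stored face is close enough."""
    best_name, best_dist = None, threshold
    for name, signature in database.items():
        d = face_distance(probe, signature)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# Toy database of two stored signatures (illustrative numbers only).
db = {
    "suspect_a": [0.1] * 26,
    "suspect_b": [0.9] * 26,
}

print(match_face([0.12] * 26, db))  # close to suspect_a -> "suspect_a"
print(match_face([0.5] * 26, db))   # far from both -> None
```

The threshold is the crucial tuning knob: set it too loose and innocent travelers trigger alarms; too tight and real suspects slip through.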

The Fresno system was installed in October by Pelco Inc., the world’s largest maker of video security systems, whose headquarters are about a mile from the airport.

About 1,800 travelers a day go through the airport’s single security checkpoint. Even with screeners monitoring the process, the facial-scanning system fails to get a good photograph about 20% of the time: the traveler looks down or to the side or simply doesn’t stop.

Of those whose faces have been successfully “captured,” the computer has sounded an alarm on one out of every 750. When that happens, security guards compare the traveler’s face with the matching image found by the computer. Usually, the two turn out not to be the same person.
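The figures above imply a rough alarm rate for this one checkpoint. A back-of-envelope calculation from the article's own numbers:

```python
# Figures from the Fresno checkpoint as reported in the article.
travelers_per_day = 1800      # through the single security checkpoint
capture_rate = 0.80           # ~20% of photos are unusable
alarm_rate = 1 / 750          # one alarm per 750 captured faces

captured = travelers_per_day * capture_rate   # 1,440 usable photos a day
alarms_per_day = captured * alarm_rate
print(round(alarms_per_day, 2))  # ≈ 1.92 alarms per day
```

Scaled to hundreds of airports and millions of travelers, nearly two alarms a day at a single small checkpoint is what drives critics' warnings about false positives.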

But in one instance, involving a middle-aged, bearded man with a Middle Eastern appearance, guards agreed with the computer. The man’s bags were pulled from the plane and searched. The FBI was called in. It took the man several hours to convince authorities that he was not a terrorist. By then, he had missed the last flight to his destination.

Pelco paid for his dinner and hotel room and took him back to the airport in the morning.

“This is a test,” said Pelco Vice President Ron Cadle. “We owed it to him. But if this was a hard-core installation, forget it. We’re not running the Hotel Pelco here.”

Critics of facial recognition say a nationwide system would result in hundreds, if not thousands, of “false positives” each day.

A facial-recognition system went into operation last summer in the night-life district of Tampa, Fla., with 36 cameras searching a 16-block area for any of 300 felons sought by police.

The American Civil Liberties Union, using the Florida open-records law, recently obtained police records on the system. The ACLU discovered that during five weeks of operation, the system did not detect a single criminal suspect among the faces it scanned but did regularly make false matches.

The ACLU concluded that the system was intrusive from a civil liberties standpoint and useless for law enforcement. Tampa police quietly shut the system down in August, later blaming software bugs and other problems.

Despite the ACLU’s concerns, authorities are moving forward with an expanded version of the system. The database is being enlarged with photographs of 45,000 convicted and accused criminals, including rapists, murderers, robbers, burglars and a few sexual predators under court supervision.

An unexplored aspect of facial profiling is what will happen to the millions of images captured by the cameras. The Fresno system automatically erases pictures that don’t match any in its database. But the computer could just as easily store them.

Already, other databanks are being expanded beyond criminals and terrorists. In Tampa, dozens of runaways are being added. Ernie Allen, president of the National Center for Missing and Exploited Children, wants facial-screening systems to include pictures of the 2,000 children his organization is looking for. “It could be of enormous benefit,” he said.

Has Privacy Lost Its Constituency?

Embedded deep in the American soul is the importance of privacy. It’s enshrined in the Fourth Amendment, which forbids “unreasonable searches and seizures,” and in that most American of art forms, the western, which often centers on characters who want to be left alone in the wilderness. The image of an authority figure demanding “your papers, please” conjures up 1930s Germany more than the United States in 2002.

Yet reaction to the specter of national ID cards, facial scans and other forms of watching has been surprisingly muted.

Some observers say the unpredictable nature of terrorism demands a change in attitudes.

“We need security before we can squabble about where to draw the line,” said Adam Keiper, president of the Center for the Study of Technology and Society, a Washington think tank.

Others wonder if privacy has simply lost its constituency.

“There is a sense that privacy is a luxury we can no longer afford,” said Jeff Ubois, co-founder of Omnivia Policy Systems, a San Francisco software company that has struggled to find customers for privacy-enhancing e-mail technology.

“We may be heading for a time,” he said, “when the only thing about us that won’t be tracked in government files will be our video rental records”--shielded under the Video Privacy Protection Act of 1988--”and our guns.”

Technology is moving America toward the transparent society he fears--a world without anonymity.

In addition to scanning faces, software can extract other information from the reams of video recorded every day. Artificial-intelligence systems convert pictures and sound into computer files. The software can translate speech into text in at least eight languages, with more in development. Voiceprints, the vocal equivalent of fingerprints, can be recorded in any tongue.

The technology was designed for media companies such as CNN that need an efficient way to organize and search their video archives. But it is well-suited to surveillance too.

The images, text and voiceprints retrieved from video can be stored like credit card numbers or addresses. Authorities could search such a database to see if the face or voice of a terrorist had been recorded by a security camera somewhere. As video surveillance becomes more pervasive, such databases probably will become common.

New software developed by Virage Inc. of San Mateo, Calif., seeks to automate one of the most arduous aspects of video security: staring at monitors. The software is designed to analyze video footage and activate an alert if it detects something it has been programmed to recognize as suspicious: a particular face, location, word or phrase.

“Guards get tired. Machines don’t,” said technology analyst Paul Saffo.

U.S. intelligence agencies have used such software to evaluate the footage shot by unmanned surveillance planes soaring over the Afghan highlands, in which hours of empty landscapes might be punctuated by a fleeting glimpse of an Al Qaeda enclave.

This technology, combined with others, could be used to link widely scattered security systems. If a terrorist avoided detection at an airport, cameras at a bus station or toll plaza might spot him. Joseph Atick, chief executive of Visionics, a maker of software for facial-recognition systems, has spoken--with enthusiasm--of a “national shield” linking every camera in the country.

Databases Take on Added Importance

The emerging identity state relies not only on images but on billions of bits of personal, financial and legal information--the data stream of everyday life. Tools are being developed to mine and sift this data for signs of criminal intent.

For years, officials have been able to check thousands of online databases that hold police files and public records. In the post-Sept. 11 world, those archives are growing dramatically, partly because of the sweeping anti-terrorism laws pushed through Congress.

Spy agencies now can demand access to Internet and phone records without a subpoena. Surveillance software can capture a person’s every keystroke, including passwords that unlock encrypted files.

Artificial-intelligence software in development is designed to look not just for obvious signs of suspicious activity, such as large bank withdrawals, but for gaps or subtle peculiarities in the data. It might call attention to someone who has credit cards but no Social Security number.
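The kind of sifting described above can be illustrated with a simple rule-based sketch. This is a hypothetical example, not any agency's actual software; the field names and the $10,000 withdrawal threshold are invented for illustration.

```python
# Hypothetical rules of the kind the article describes: flag records
# with large withdrawals, or with gaps such as credit cards but no
# Social Security number on file.

def flag_record(record, withdrawal_limit=10_000):
    """Return a list of reasons a record looks anomalous (empty if none)."""
    reasons = []
    if record.get("largest_withdrawal", 0) > withdrawal_limit:
        reasons.append("large withdrawal")
    if record.get("credit_cards", 0) > 0 and not record.get("ssn"):
        reasons.append("credit cards but no Social Security number")
    return reasons

print(flag_record({"credit_cards": 3, "ssn": None}))
# -> ['credit cards but no Social Security number']
print(flag_record({"credit_cards": 1, "ssn": "123-45-6789"}))
# -> []
```

The weakness critics point to is visible even in this toy version: the rules flag patterns, not intent, so everyone whose paperwork is merely unusual gets swept in.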

By its very nature, such technology undermines traditional checks and balances.

“We’re collecting data on everyone on the assumption that anyone may be the next terrorist,” said Deirdre Mulligan, director of the Law and Technology Clinic at UC Berkeley. “This subverts our traditional notion of the ability of the government to survey its citizens” only if there is probable cause to suspect criminal conduct.

Another concern is that authorities have a natural inclination to believe what their computers collect, said Bruce Schneier, a computer security expert.

“You end up with a society in which the database is more important than reality,” he said.

The possibility that innocents will come under suspicion is more than theoretical. Ask Mark Deuitch.

On the Saturday after the terrorist attacks, the investment banker looked out his window and saw six FBI agents climbing from sport-utility vehicles outside his home in Boone, N.C., a mountain resort area. Two agents guarded his wife. The others interrogated Deuitch for two hours.

They came carrying a dossier gleaned from online repositories of motor vehicle, airline and credit information.

Judged solely on this data, Deuitch seemed a threat. He conducts business in the Middle East, the former Yugoslavia and other zones of conflict. He was scheduled to fly on Sept. 11 using a ticket bought by a Saudi client with a name similar to that of hijacker Mohamed Atta. And Deuitch once rented a house in Pompano Beach, Fla., near the residence of some of the hijackers.

Deuitch cooperated fully--relieved, he said, that the FBI was so aggressively pursuing terrorists. The agents left satisfied that he was clean.

But Deuitch’s name was not immediately dropped from the bureau’s list of people to question. That list was leaked to the media and soon circulated on the Internet--with Deuitch’s name still on it.

“I was being put at risk--myself, my family, my business reputation,” Deuitch said. “I do a lot of very sensitive work, and background checks are pretty much required. Potential clients don’t always come back and ask for clarification if there is something unusual on a record. It taints me.”

Using the Brain as a Lie Detector

“Brain fingerprinting” sounds like something out of science fiction, but some consider it a logical next step in security. In recent years, the FBI, CIA, Secret Service and other agencies have contacted neuroscientist Larry Farwell, who invented the technique.

Most of the agencies concluded that brain fingerprinting, though apparently feasible, had limited application, a General Accounting Office study found. Sept. 11 has stimulated a fresh round of interest.

Farwell, chairman and chief scientist of Brain Fingerprinting Laboratories Inc. in Fairfield, Iowa, has developed what he says is a superior lie detector: a sensor-equipped headband that measures brain waves.

While wearing the headband, a subject is shown material that includes “irrelevants” and “probes.” Information in the first category establishes a baseline of response; in the second category are things known only to the examiner and the subject. With a murder suspect, it might be a picture of the crime scene. With an Al Qaeda member, it might be one of the group’s code words.

In either case, recognition would cause a spike in the brain waves. The guilty would be unmasked--in theory, at least. Though Farwell claims 100% accuracy in dozens of tests, brain fingerprinting hasn’t undergone rigorous independent evaluation.
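The baseline-versus-probe comparison described above can be sketched numerically. This is an illustrative simplification, not Farwell's actual method: it assumes recognition is inferred when the average response to probe items exceeds the irrelevant-item baseline by several standard deviations, with the three-sigma cutoff being an invented threshold.

```python
import statistics

def shows_recognition(irrelevant_responses, probe_responses, sigmas=3.0):
    """Infer recognition if the mean probe response exceeds the
    irrelevant-item baseline by more than `sigmas` standard deviations."""
    baseline = statistics.mean(irrelevant_responses)
    spread = statistics.stdev(irrelevant_responses)
    return statistics.mean(probe_responses) > baseline + sigmas * spread

irrelevants = [1.0, 1.1, 0.9, 1.05, 0.95]  # baseline response amplitudes
probes = [2.5, 2.7, 2.4]                    # markedly larger responses

print(shows_recognition(irrelevants, probes))  # True: the "spike" stands out
```

Even in this toy form, the method's limits are apparent: it detects familiarity with the probe material, not guilt, which is one reason independent evaluation matters.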

Steve Kirsch, who founded the search tool InfoSeek, one of the most successful of the early Internet technologies, is a prominent proponent of brain fingerprinting. He goes even further than Farwell, arguing that it should be used not only to question criminal suspects but to screen the public at large.

Voluntarily, of course.

“America’s all about choice,” Kirsch said. “If you have your skull scanned and show you’re not a terrorist, you get to go through the fast-pass line.”

Assuming the technology is proved reliable, the choice might not be a choice forever.

“Once brain scanning becomes commonplace,” Kirsch said, “people will want to know: ‘Is this flight 100% screened or not?’ ”

The remark reflects a sense of inevitability that colors debate over the technologies of vigilance. With the high-tech economy suffering its worst recession in decades, companies are flocking to the security business.

They know that money will follow fear. A few skeptics worry that as the identity society becomes an important source of jobs and profits, it will begin to seem necessary, then normal, and finally invisible.

Such fears were sharply expressed in an essay in the influential Internet newsletter Nettime. How long will it be, the anonymous writer wondered, before “protester” becomes synonymous with “terrorist”?

There will be, he warned, “no piece of earth exempted from subjection to the identity apparatus.”
