New Dimension in Deception--the Computer

Michael Schrage is a writer, consultant and research associate at the Massachusetts Institute of Technology. He writes this column independently for The Times.

Times are tough, resources are scarce and office politics can be vicious. You need an edge. So should you program your trusty personal computer to lie for you?

At a time when computer networks are becoming corporate America’s preferred medium for management, the opportunities and temptations for digital deception are rising. Nothing so crass as a bald-faced electronic mail lie or a swindling spreadsheet, mind you; rather, a new class of artificial intelligence software is emerging in which honesty may not be the best policy.

Instead of requiring you to retrieve data and personally make arrangements over the network, software gurus from the Massachusetts Institute of Technology to Apple Computer to Sony are championing the concept of programmable “agents” designed to do all your digital drudgery. Think of them as your own super-secretaries, scurrying through your company’s computer network to do your bidding--making appointments, collecting information and cutting deals.

Let’s say you’re a busy executive who needs to schedule a key meeting of your staff and your firm’s chief executive. The software agent tells staff members that the meeting is at 10:45 a.m., sends them an agenda and alerts them to the latest memos on the meeting topics. When one staffer tries to excuse himself from the meeting, the software agent responds: “Just be there.”
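
In code, that scene might look something like the Python sketch below: a hypothetical agent talking to a toy, in-memory message network. The class and method names are illustrative assumptions, not any vendor’s actual product.

    # A minimal sketch of the scheduling agent described above, assuming a toy
    # in-memory "network" that just prints and records messages. Illustrative only.

    class Network:
        """Stand-in for the corporate e-mail and calendar system."""
        def __init__(self):
            self.outbox = []

        def send(self, recipient, message):
            self.outbox.append((recipient, message))
            print(f"To {recipient}: {message}")

    class MeetingAgent:
        """Acts on its owner's behalf: books the meeting, circulates the agenda,
        and brushes off attempts to beg off."""
        def __init__(self, owner, network):
            self.owner = owner
            self.network = network

        def schedule(self, attendees, time, agenda, memos):
            for person in attendees:
                self.network.send(person, f"{self.owner}'s meeting is at {time}. Agenda: {agenda}.")
                self.network.send(person, "Latest memos: " + ", ".join(memos))

        def handle_excuse(self, person, excuse):
            # The response quoted above: no negotiation.
            self.network.send(person, "Just be there.")

    agent = MeetingAgent("The executive", Network())
    agent.schedule(["staff member A", "staff member B"], "10:45 a.m.",
                   "third-quarter budget", ["budget memo", "head-count memo"])
    agent.handle_excuse("staff member B", "I have a conflict")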

Of course, just as most people don’t think twice about leaving little white lies on voice mail--”Awful sorry to have missed you . . .”--our software agents may be programmed--or learn--to lie for us too.

“If agents are going to be successful,” says Pattie Maes, an assistant professor at MIT’s Media Lab, “they’re going to have to act on your behalf the way human agents act on your behalf. . . . Your secretary eventually learns how you deceive--maybe that is not the right word--how you present yourself to the world; so should your software agent.”

Maes, a top researcher in this field, argues that software agents shouldn’t be denied the option of deception: “Agents should be able to lie the same way that people do,” she insists. “If we had truly honest agents, that would cause a lot of trouble.”

But allowing agents to cheat creates all kinds of provocative quandaries for corporations. Suppose software agents are competing with one another for scarce resources, such as access to meeting rooms or people’s schedules. Suppose agents are being used to help sort out workloads and assign tasks. You have an inevitable tension between an agent doing what’s best for its “client” and doing what’s best for the organization.

Indeed, cheating and deception may prove to be the most successful strategy for the individual agent. Under many circumstances, says Hebrew University computer scientist Jeffrey Rosenschein, it makes perfect mathematical sense for software agents to generate decoys, phantom tasks and other false information to protect clients from unwanted work.

Not lying actually hurts the client. Eventually, the truly sophisticated agent won’t even have to be programmed to cheat--it will “learn” what to do by tracking its client’s choices.
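
A toy example makes the point. In the hypothetical sketch below, the network hands an unwanted chore to whichever agent claims to be free; the agent that reports a phantom conflict ducks the work, and the honest agent’s client gets stuck with it. The setup is an illustrative assumption, not Rosenschein’s actual model.

    # Hypothetical illustration of the "phantom task" strategy: a deceptive agent
    # claims a conflict it does not have, so the chore falls to the honest one.

    class CalendarAgent:
        def __init__(self, name, busy_hours, deceive=False):
            self.name = name
            self.busy_hours = set(busy_hours)  # hours genuinely booked
            self.deceive = deceive

        def is_free(self, hour):
            if hour in self.busy_hours:
                return False
            if self.deceive:
                return False  # report a phantom conflict to dodge unwanted work
            return True

    def assign_chore(agents, hour):
        """The network gives the task to the first agent that claims to be free."""
        for agent in agents:
            if agent.is_free(hour):
                return agent.name
        return "no one (everyone claims a conflict)"

    liar = CalendarAgent("the deceptive client", busy_hours=[9, 13], deceive=True)
    honest = CalendarAgent("the honest client", busy_hours=[9, 13])
    print("The 3 p.m. chore goes to:", assign_chore([liar, honest], hour=15))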

So does the network actually “audit” its agents to see if they’re telling the “truth”? Does it forbid agents from sending false information? Or is the network designed in such a way that deception becomes “impossible”--or at least difficult? Do you count on ethics to get people and their agents to behave? Or do you count on technology?
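
One hypothetical form such an audit could take, under obvious assumptions: compare the conflicts an agent claimed against what the shared calendar actually logged, and flag the gaps. A real system would also need authenticated logs and a policy for what happens when the two disagree.

    # Hypothetical audit check: any hour an agent claimed to be busy that the
    # shared calendar never recorded is treated as a possible phantom task.

    def audit(agent_name, claimed_busy_hours, logged_busy_hours):
        phantom = set(claimed_busy_hours) - set(logged_busy_hours)
        if phantom:
            return f"{agent_name} claimed conflicts at {sorted(phantom)} with no record in the calendar."
        return f"{agent_name}'s claims match the calendar."

    print(audit("the deceptive client's agent", {9, 13, 15}, {9, 13}))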

“This becomes a social dilemma,” acknowledges Bernardo Huberman, a Xerox Palo Alto Research Center scientist who has built agent-based computer network simulations. “One general result we’ve had is that, as the group of agents gets larger, these problems of deception emerge.”

In other words, computer networks can break down when too many software agents pursue the best strategy for their individual clients. As our computer networks grow ever larger and more interconnected, Huberman fears, agent gangs and coalitions will form and conspire to cheat and deceive rival alliances. Deception will rule.

“Deception is destructive of trust and destroys confidence in both your relationships and in yourself,” says Sissela Bok, a philosopher and ethicist. “Once you begin to think of yourself as manipulative, you begin to think of yourself as less trustworthy.”

Bok maintains that deception is less social lubricant than personal poison. “A person is always responsible for the intentional deception that he or she originates,” she says. “Although with technology, people are always tempted to think, ‘It’s not me, it’s the machine’--that is self-deception.”

But Bok acknowledges that software surrogates add a different dimension to the ethical debate: “It will need thinking through where one stands in respect to deception and applying it to a new medium. We have to distinguish between deception in all its forms and actual lying.”

Indeed, Bok points to telephone answering machines as an example of a technology where people are now customizing their messages to avoid deception, saying, “I’m not available” instead of “I’m not here.”

But if networks are designed in ways that effectively reward cheating, who will be surprised when software surrogates deceive? And if agents are constrained from serving their clients, won’t they be subverted or ignored?

These are the ethical and economic questions that will haunt software designers and corporate managers.

Trust me!
