In Chess, Qualified Respect for Computers

Computing, as a science and an industry, has always been intimately connected with games, and with none more so than chess.

The quest to build a computer grandmaster has helped bring focus to computing research since the 1950s and was a major line of inquiry in artificial intelligence. Few advances in hardware came unaccompanied by parallel advances in computer chess; there was a program for IBM mainframes in 1958 and one for $10-million Cray supercomputers by the 1970s. Commercial programs contributed to the demand for the first personal computers in the 1980s.

So it’s not surprising that eight years after IBM’s Deep Blue chess computer defeated world champion Garry Kasparov in what was billed as the ultimate test of man vs. machine, experts still debate whether that match is computing’s last word on the subject -- and even whether the computer didn’t somehow, well, cheat.

The issue has been getting a new airing, thanks to an exhibit installed this month at the Computer History Museum in Mountain View.

Titled “Mastering the Game: A History of Computer Chess,” the exhibit chiefly covers the 50 years of effort to teach a machine to play a quintessentially human pastime, culminating in the Deep Blue-Kasparov match. The museum launched the exhibit with a two-hour panel discussion whose participants’ careers spanned the same period.

They were Edward Feigenbaum, a professor of computer science at Stanford University and a leading artificial-intelligence expert; John McCarthy, a Stanford emeritus professor whose program played a long-distance match against a Soviet chess program in the mid-1960s and lost (“The Soviets had a better program but a worse computer,” he told the audience); David Levy, a British international master who won a series of bets against chess computers in the 1970s and 1980s that inspired programmers to refine their machines; and Murray Campbell, a member of the IBM team that built Deep Blue. (The team disbanded after its victory.)

The role of chess in artificial-intelligence research received special attention from the group. The field’s pioneers believed that the knowledge, judgment, learning ability and even psychology of a chess master could be replicated in hardware and software.

Chess appeared to be a well-defined, accessible challenge. “There was a pool of human experts, a rating scale [of players] so you could judge your progress, and it was difficult to solve,” Campbell told the museum audience.

He might have mentioned a further virtue: for all its complexity, chess is not impossibly difficult. The number of potential moves a computer has to consider in a chess game has been estimated at 10 to the 120th power (a one followed by 120 zeros). This is an immense number, vastly greater than the estimated quantity of subatomic particles in the universe. But it is handily outstripped by the corresponding number for the Japanese board game Go, which has been estimated at 10 to the 761st power. Tellingly, computers haven't come close to mastering Go.
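
Those magnitudes are easier to feel with a little arithmetic. What follows is a rough back-of-the-envelope sketch in Python -- not anything from the exhibit or from IBM -- that simply compares the cited estimates, using the 200-million-positions-per-second search rate quoted later for Deep Blue:

```python
import math

# Rough game-tree estimates cited above (orders of magnitude only).
chess_tree = 10 ** 120        # Shannon's estimate for the chess game tree
go_tree = 10 ** 761           # the estimate cited here for Go
particles = 10 ** 80          # rough count of subatomic particles in the universe

positions_per_second = 200_000_000   # Deep Blue's cited search speed
seconds_per_year = 60 * 60 * 24 * 365

# Orders of magnitude separating the quantities.
print(f"chess tree / particle count: about 10^{round(math.log10(chess_tree) - math.log10(particles))}")
print(f"Go tree / chess tree: about 10^{round(math.log10(go_tree) - math.log10(chess_tree))}")

# Even at Deep Blue's speed, exhausting the chess tree would take on the
# order of 10^104 years -- hence searching selectively, a few moves deep,
# rather than trying to solve the game outright.
years_exponent = math.log10(chess_tree) - math.log10(positions_per_second * seconds_per_year)
print(f"years to exhaust the chess tree at that speed: about 10^{round(years_exponent)}")
```

The point of the arithmetic is not precision but scale: no conceivable machine can exhaust either game tree, and the gulf between 10 to the 120th and 10 to the 761st is part of why brute force could conquer chess while leaving Go untouched.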

In 1957, the artificial-intelligence pioneer Herbert Simon predicted that a machine would be chess champion of the world within 10 years. He was off by three decades. More important, however, his prediction of how computers would solve chess proved to be entirely wrong -- to artificial intelligence’s enduring chagrin.

Deep Blue’s victory “challenged the fundamental hypothesis that AI had grown up with,” says Feigenbaum. Rather than building on knowledge and experience to learn, human-style, Deep Blue prevailed by brute force: Its massive computing power allowed it to investigate as many as 200 million possible positions per second and choose among them by measuring each against criteria preprogrammed by its human designers.
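
What “brute force” means in practice can be sketched in a few lines of Python. This is purely illustrative -- Deep Blue ran on custom chess chips, and its evaluation weighed hundreds of hand-tuned features -- but the skeleton is the same one software engines use: a fixed-depth search of the game tree, pruned where possible, with each leaf position scored by a hand-written evaluation function. The position object and its methods here are an assumed interface, and the evaluation counts only material:

```python
# Illustrative sketch of brute-force game-tree search (negamax with
# alpha-beta pruning). Not Deep Blue's code: the `position` interface
# (legal_moves, play, pieces, ...) is assumed, and the evaluation below
# counts only material where a real engine weighs many hand-tuned features.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(position):
    """Score a position from the side-to-move's point of view (material only)."""
    own = sum(PIECE_VALUES[p] for p in position.pieces(position.side_to_move))
    opp = sum(PIECE_VALUES[p] for p in position.pieces(position.opponent))
    return own - opp

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf")):
    """Best score reachable within `depth` plies, assuming best play by both sides."""
    if depth == 0 or position.is_game_over():
        return evaluate(position)
    best = float("-inf")
    for move in position.legal_moves():
        # The opponent's best reply is our worst case, hence the negation.
        score = -alphabeta(position.play(move), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # prune: the opponent would never allow this line
            break
    return best

def best_move(position, depth=4):
    """Pick the legal move whose resulting position searches best."""
    return max(position.legal_moves(),
               key=lambda move: -alphabeta(position.play(move), depth - 1))
```

Deep Blue’s edge was doing something like this roughly 200 million times a second in specialized hardware, to depths no human can calculate; whatever “intelligence” it had lived entirely in the hand-tuned evaluation and search heuristics.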

On the surface, the result resembled human cogitation and strategic planning closely enough to unnerve Kasparov. The world champion, who had been mentally prepared for battle with an entirely different sort of adversary, lost the second game of the six-game match and never recovered his poise.

Deep Blue’s strategic and imaginative qualities were illusory, however. “Deep Blue did not have an effective ability to learn,” Campbell said at the forum. “The entire machine was honed and tuned by hand.” Creating a program that learned chess as humans do, from teachers, books and their own playing experience, would be “well beyond the current state of the art,” he added.

For that reason, computing and artificial-intelligence professionals still seem to regard Deep Blue’s victory as somehow unsatisfactory, the cybernetic equivalent of a boxer who pulverizes every opponent by sheer muscle, but doesn’t have the slightest sense of technique or finesse.

McCarthy expressed this frustration when he threw out the idea of a man-machine tournament “with extremely stringent restrictions on the speed of computers,” like thoroughbreds forced to carry extra weight in a race. Under such conditions the computers “might be able to beat humans, but they would have to be clever.”

Nor do artificial-intelligence proponents consider Deep Blue’s brute-force victory a conclusive refutation of their theories. “We still have a lot to learn from chess playing,” Feigenbaum says. Superhuman search speed may win at computer chess, he says, but knowledge-based and learning approaches still outperform brute force in every other field artificial intelligence has examined.

Interestingly, the chess world seems more comfortable anthropomorphizing Deep Blue’s victory than the computing world is. Asked at the forum whether the machine’s win was “legitimate,” Levy replied, “Most definitely.”

After all, he said, a chess match is partially psychological, and Deep Blue won on that battlefield.

“Kasparov’s psyche was destroyed,” Levy said. “In a six-game match, if your psyche is destroyed in game two, you’re not going to play well after that. It was a perfectly legitimate method.”

Golden State appears every Monday and Thursday. You can reach Michael Hiltzik at golden.state@latimes.com and read his previous columns at latimes.com/hiltzik.
