
Did a computer finally pass the Turing Test? Signs point to ‘no.’


The world of high-tech hype was turned on its ear over the weekend by the news that a computer had passed the “Turing Test” for the first time. The claim was that a machine had finally crossed the threshold separating artificial intelligence from human intelligence.

Well, no. The truth is rather more modest.

The supposed victory of the program known as “Eugene Goostman,” which is designed to simulate the responses of a 13-year-old boy from Odessa, Ukraine, was accepted at face value by tech journals and newspapers around the world, including The Times.

First, what is the Turing Test? The concept derives from a prediction made by the pioneering computer scientist Alan Turing in an article in October 1950: that in another 50 years, computers would do so well at imitating human thought that “an average interrogator will not have more than a 70% chance of making the right identification after five minutes of questioning.” In other words, a machine meets the standard if it fools the questioner at least 30% of the time. (Eugene was supposedly mistaken for a human just over 30% of the time, narrowly exceeding that mark.)


Over time, Turing’s philosophical concept morphed into a formalized “test.” More recently, it has become a sort of technology-wonk parlor game in which engineers write scripts that typically allow their programs to pick keywords out of questions posed by the judges and offer potted replies that sound like direct answers, distract the judges with tangents, or, when no keyword can be extracted from a question, profess perplexity.
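To make that concrete, here is a minimal sketch, in Python, of the kind of keyword-matching script just described. It is not Eugene Goostman’s code; the keywords, canned replies, and deflections are invented for illustration. But the structure, match a keyword and return a potted answer, otherwise deflect or profess confusion, is essentially the whole trick.

```python
# A minimal, illustrative keyword-matching chatbot of the kind described above.
# This is NOT Eugene Goostman's code; the keywords, canned replies and
# deflections below are invented for this example.
import random

CANNED_REPLIES = {
    # keyword -> potted reply that sounds like a direct answer
    "family": "My father is an engineer and my mother works in a shop. Why do you ask?",
    "school": "School is boring. I would rather play computer games.",
    "weather": "It is always grey in Odessa at this time of year.",
}

DEFLECTIONS = [
    "That reminds me of a funny story, but it is too long to tell you now.",
    "You ask strange questions. What do YOU think about it?",
]

CONFUSED = "I am not sure I understand. Could you say it more simply?"


def reply(question: str) -> str:
    """Scan the question for a known keyword and return its canned answer;
    if nothing matches, deflect with a tangent or profess perplexity."""
    text = question.lower()
    for keyword, answer in CANNED_REPLIES.items():
        if keyword in text:  # crude substring match, no real parsing
            return answer
    return random.choice(DEFLECTIONS + [CONFUSED])


if __name__ == "__main__":
    print(reply("Tell me about your family."))    # keyword hit: potted reply
    print(reply("What is the capital of Peru?"))  # no keyword: deflect or feign confusion
```

A judge who sticks to direct, concrete questions quickly exhausts the keyword table, which is why experienced judges tend to see through such programs in short order.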

Programs that can fool a high percentage of judges into thinking they’re human haven’t been all that rare; a rundown of the historical record can be found here. In 1989, a program named MGonz fooled a user who connected to its creator’s online account into thinking he was chatting with a human and kept him typing for an hour and 27 minutes, largely by distracting him with programmed truculence and profanity. A transcript of the full not-safe-for-work but very amusing conversation is here.

So the announcement of Eugene Goostman’s triumph issued by Britain’s University of Reading, which sponsored the latest test, incorporated a sizable measure of hype. (Its first release described the victor inaccurately as “supercomputer Eugene Goostman.” The university later corrected the release to read “computer programme Eugene Goostman.”) It breathlessly quoted Kevin Warwick, who organized the event, as warning that “having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime.” The real crime threat now as always, of course, is persons tricking us into thinking they’re persons we can trust.

As Mike Masnick of TechDirt observes, Eugene’s programmers put their thumbs on the scale by depicting their creation as a 13-year-old Ukrainian boy, which predisposed the judges to treat stagy or inappropriate responses as artifacts of adolescent inexperience and the language barrier.

Although the transcripts of the Reading event aren’t available, computer engineer David Auerbach examined an earlier trial to show that an experienced Turing judge would easily see through Eugene’s “psychological smoke and mirrors.”

In any case, to treat the Turing Test as a sort of circus act is to misunderstand Turing’s point.

In the words of Turing’s biographer, Andrew Hodges, his idea of a “learning” machine — his grail as a computer scientist — was one that could “produce more ideas than those with which it had been fed.” Turing himself dismissed the importance of his own argument: “The original question, ‘Can Machines Think?’ I believe to be too meaningless to deserve discussion,” he wrote in his original article.


But Turing did believe that a machine that could meet his specifications would do so with the breadth of language and ideas of a generally educated person. In that light, deliberately lowering the questioners’ expectations of the program, as Eugene’s creators did by posing it as a 13-year-old Odessa native, defeats the very purpose of the test. True artificial intelligence, which involves a process that Turing considered “learning,” may not be unattainable, or even far off. But “Eugene Goostman” didn’t display it.

Keep up to date with The Economy Hub by following @hiltzikm.
