The Turing Test: herald of the smart machine, or just a publicity stunt?


Eugene, a chatbot, successfully passed himself off as a goofy kid from Ukraine. But, Cathal O'Connell asks, is this the milestone in artificial intelligence it appears to be?


Meet 13-year-old Eugene Goostman, who lives in Ukraine. In his spare time he enjoys computer programming and playing with Bill, his pet guinea pig. “I like play language cassettes for Guinean to my guinea pig,” he says with trademark humour in broken English. “My pig learned to say ‘grunt-grunt’, though I'm not sure that it is Guinean.”

Alan Turing, the father of computing, designed his famous test in 1950. – National Portrait Gallery, London

He could pass for just another goofy kid. And he did. But Eugene is actually a chatbot, a computer program designed to recognise keywords in a written message and respond in a way that passes for intelligible conversation. In June of this year, Eugene managed to convince 10 out of 30 judges that he was a human. In doing so, Eugene was declared the first computer program ever to pass the famous Turing test, a landmark of artificial intelligence laid down in 1950 by Britain’s Alan Turing, the father of computer science.
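
Under the hood, a keyword chatbot can be surprisingly simple. The short Python sketch below shows the general idea only; the keyword rules and canned replies are invented here for illustration and bear no relation to Eugene's actual scripts.

    import random

    # Toy keyword-to-reply rules, invented purely for illustration.
    RULES = {
        "guinea pig": "My pet guinea pig Bill can say 'grunt-grunt'. Do you keep pets?",
        "school": "School is boring. I'd rather be programming my computer.",
        "weather": "The weather in Odessa is fine, I guess. Why do you ask about weather?",
    }

    # Stock deflections used when no keyword is recognised in the message.
    FALLBACKS = [
        "That's interesting. Tell me more!",
        "Hmm, why do you ask?",
        "Sorry, my English is not so good. Can you say it another way?",
    ]

    def reply(message: str) -> str:
        """Return the canned reply for the first keyword found, else deflect."""
        text = message.lower()
        for keyword, response in RULES.items():
            if keyword in text:
                return response
        return random.choice(FALLBACKS)

    print(reply("Did you go to school today?"))   # keyword hit
    print(reply("What do you think of opera?"))   # no keyword: random deflection

A well-chosen persona gives those fallback lines cover: coming from a cheeky 13-year-old, evasions and non-sequiturs read as character rather than as a program losing the thread.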

“This milestone will go down in history,” proclaimed Kevin Warwick, a computer scientist and the organiser of the contest held at the University of Reading in the UK. But the announcement of Eugene’s success attracted a wave of media interest as well as a barrage of flak. Was the Turing test really passed?

One major source of controversy is that Turing left the rules of his test open to interpretation. He did mention computers fooling 30% of interrogators after five minutes of questioning, but only as a prediction of what machines might manage by the year 2000, not as a bar for “passing” the test, and he never defined how long or how searching a conversation would have to be to count. The University of Reading contest, for instance, limited each conversation to five minutes.

That set the bar way too low, says Kevin Korb, a computer scientist at Monash University. Passing the test, Korb and others argue, does not mean fooling 30% of the human judges after a brief chat – it means fooling 50% of them after a long conversation.

John Denning, one of Eugene’s creators and a programmer with the AI start-up Wholesale Change in California, argues that imposing such conditions strays too far from Turing’s initial paper. “Academics like to say the test is bunk. That the machine’s got to talk for longer and it’s got to talk about Breaking Bad or Downton Abbey,” he says. “That’s moving the goalposts and it’s not right.”

The Eugene chatbot was also criticised for using his age and status as a non-native English speaker to excuse grammatical errors and gaps in his knowledge. “There’s no question that that makes the test a lot easier to ‘pass’,” says Korb.

But Denning finds accusations of trickery unfair. “Last time I checked, 13-year-old Ukrainian boys are human,” he says.

But the deepest flaw with Turing’s test may simply lie in how easily human beings can be tricked.

A decade and a half after Turing proposed the test, a program called ELIZA had already convinced some people it was human. “They’d refuse to believe it wasn’t, even when people told them,” relates Graham Mann, an AI researcher at Murdoch University in Perth.

Mann is sceptical that designing programs to pass the Turing test really moves the field of AI forward. “Most modern AI researchers don’t take it that seriously, to be honest with you,” he says. “But boy, it’s captured the public’s imagination and it does stimulate thought and discussion, which is no bad thing.”

This article is part of our special edition, Rise of the Robots. For more stories on AI and automation, see https://beta.cosmosmagazine.com/special-edition-robots-and-ai