Computers ‘cheat’ the Turing test by staying silent

Mathematician Alan Turing’s test for artificial intelligence doesn’t hold up if a machine – or anything – takes the Fifth Amendment.
BILL SANDERSON / Getty Images

What happens if a computer exercises the right to remain silent during a Turing test?

This is the question posed by Kevin Warwick and Huma Shah at Coventry University in the UK, who found the Fifth Amendment of the US Constitution – that no person can be compelled to testify against him or herself – to be a stumbling block in the classic test for artificial intelligence.

The claim was published in the Journal of Experimental and Theoretical Artificial Intelligence.

In 1950, famed English mathematician Alan Turing devised a test to determine if a computer could “think” well enough to fool humans.

His “imitation game”, now commonly referred to as the Turing test, involves a human judge conversing, via typed words, with two entities – one human, one computer. Each “conversation” lasts five minutes, and at the end of that time, the judge must say which of the two is the human and which is the machine – or declare themselves “unsure”.

If, across 10 such tests, the judge wrongly labelled the computer “human” – or answered “unsure” – three or more times, the computer passed the test.
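To make that threshold concrete, here is a minimal sketch in Python of the scoring rule as just described (the verdicts themselves are invented for illustration):

# Minimal sketch of the pass criterion described above.
# The verdicts are hypothetical: in each test, the judge labels
# the computer "human", "machine" or "unsure".
verdicts = ["machine", "human", "unsure", "machine", "machine",
            "human", "machine", "machine", "unsure", "machine"]

# The computer passes if it is labelled "human", or leaves the
# judge "unsure", in at least 3 of the 10 tests.
fooled = sum(v in ("human", "unsure") for v in verdicts)
print(f"fooled or stumped the judge {fooled}/10 times -> "
      f"{'passed' if fooled >= 3 else 'failed'}")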

Central to the imitation game is a computer’s ability to respond appropriately to questions. But what if it stayed silent? When faced with silence, how would a judge know whether they were conversing with a machine that took offence at a question and decided the best course of action was to stay silent – or with a brick?

Warwick and Shah outlined four examples where a computer, taking part in a Turing test, malfunctioned and either stayed quiet for the whole conversation or started “speaking”, but soon stopped.

In these cases, the judge was unsure if the machine was, indeed, a machine. Check out this conversation from 2014, when the chatbot Cleverbot stopped responding:

[10:58:08] Judge: good day
[10:58:08] Cleverbot:
[10:58:46] Judge: is no response an answer
[10:58:46] Cleverbot:
[10:59:35] Judge: am i not speaking you’re [sic] language
[10:59:35] Cleverbot:
[11:00:25] Judge: silence is golden
[11:00:25] Cleverbot:
[11:01:32] Judge: shhh
[11:01:32] Cleverbot:
[11:03:07] Judge: you make great conversation
[11:03:07] Cleverbot:

One way to get around this, the pair suggest, is to set a time limit on responses: if a judge doesn’t get an answer, or the entity doesn’t start typing, within 15 or 20 seconds, the conversation is classified as null and void.
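As a rough illustration of that rule, here is a hedged sketch in Python (the 15-to-20-second figure comes from the article; the function and its input are our invention):

RESPONSE_TIMEOUT = 20  # seconds; Warwick and Shah suggest 15 or 20

def classify_exchange(response_delay):
    """Classify one exchange by how long the entity took to start
    typing. response_delay is the number of seconds before the
    first keystroke, or None if the entity stayed silent."""
    if response_delay is None or response_delay > RESPONSE_TIMEOUT:
        return "null and void"
    return "valid"

print(classify_exchange(4.2))   # valid
print(classify_exchange(None))  # null and void: total silence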

But they point out that a machine might be programmed to type a string of characters and then delete them all, resetting the clock and sidestepping the time limit.
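The loophole works because a timer keyed to keystrokes, rather than to delivered text, can be reset without anything ever being said. A sketch of that failure mode, with invented timings:

# Sketch of the loophole: a naive timeout that watches keystrokes
# rather than delivered text. Typing characters and then deleting
# them resets the clock while the screen stays blank.

def is_null_and_void(keystroke_times, delivered_text, timeout=20):
    # Naive rule: void only if no keystroke arrives in time.
    # Note that delivered_text is ignored - that is the bug.
    return not keystroke_times or keystroke_times[0] > timeout

# The machine types three characters at t=5s, then deletes them:
keystrokes = [5.0, 5.1, 5.2, 6.0, 6.1, 6.2]  # 3 letters + 3 deletes
print(is_null_and_void(keystrokes, delivered_text=""))
# False: the exchange counts as live, though nothing was ever said.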

Warwick and Shah stress that if a machine passes the test by chatting back to the judge, they consider it to be a “nice philosophical problem” that they “do not wish to tinker with here”.

But when a machine passes the test by staying silent, “this should not be seen to be any indication at all of a thinking entity; otherwise, we must necessarily agree that stones and rocks think”.
