AI killed the Turing Test. What should replace it?

Researchers have proposed a potential replacement for the Turing Test, to help humans work out whether a machine (or AI) is actually thinking or just regurgitating data.

A thought experiment devised by Alan Turing in the 1950s became known as the Turing Test: an interrogator tries to determine which of two ‘players’ is a computer and which is a human.

In recent times, this test has been made redundant by AI.

It’s been defeated by people using Tinder, by ‘cheating’, and by many of the chatbots that are quickly becoming part of our lives.

So, with the Turing Test beaten, how can we tell if a machine really is thinking?

A paper published in the journal Intelligent Computing this month suggests that treating the machine as if it “were a participant in a psychological study” would provide the answers.

The two researchers – Philip Nicholas Johnson-Laird of Princeton University and Marco Ragni of Chemnitz University of Technology – suggest a three-step framework the machine would undergo to show whether it can think for itself:

  1. Testing in Psychological Experiments
  2. Self-Reflection
  3. Examination of Source Code

The first step looks at its ‘inferences’ – a battery of tests to determine whether it’s using human-like reasoning or standard logical processes.

The second – self-reflection – tests the machine’s understanding of its own reasoning, to see whether it displays introspection.

The team gives an example of this: asking the program, “If Ann is intelligent, does it follow that Ann is intelligent or she is rich, or both?”

A human would recognise that nothing suggests Ann is rich, and would hesitate to draw that conclusion even though it follows under standard logic. A computer, on the other hand, would struggle to explain why that is the case.
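To make the contrast concrete, here is a minimal sketch (not from the paper, just an illustration in Python) showing that the Ann inference is valid under standard propositional logic – the kind of verdict a purely logical system would return – even though people typically decline to draw it.

```python
from itertools import product

def valid(premise, conclusion, variables):
    """An inference is logically valid when the conclusion holds in every
    assignment of truth values that makes the premise true."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if premise(env) and not conclusion(env):
            return False
    return True

# Premise: "Ann is intelligent".  Conclusion: "Ann is intelligent or Ann is rich".
variables = ["intelligent", "rich"]
premise = lambda env: env["intelligent"]
conclusion = lambda env: env["intelligent"] or env["rich"]

print(valid(premise, conclusion, variables))  # True: standard logic accepts the inference
# Most people nevertheless refuse to conclude anything about Ann being rich,
# because nothing in the premise supports it; probing that gap is the point
# of the proposed psychological tests.
```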

Thirdly, the researchers would dive into the source code, looking for ‘cognitive adequacy’. This is to separate ‘deep learning’ – which can be thought of as a black box – from true reasoning.

“In sum, we propose to replace the original Turing test with an examination of a program’s reasoning,” the researchers write in their new paper.  

“We treat it as a participant in a series of cognitive experiments, and, if need be, we submit its code to an analysis that is an analogue of a brain-imaging study.”

With AI becoming entrenched in our daily lives, having a way to understand if and when machines are reasoning like a human – and therefore ‘thinking’ – is no longer just Turing’s thought experiment, but a real problem to solve.
