Lie-detecting AIs could make people more accusatory

AI programs designed to detect lies could make people more willing to accuse others of dishonesty, according to new research.

The study, published in iScience, found that people who requested an AI’s judgement on whether something was true or false were then very likely to trust the AI.

“Our society has strong, well-established norms about accusations of lying,” says senior author Nils Köbis, a behavioural scientist at the University of Duisburg-Essen, Germany.

“It would take a lot of courage and evidence for one to openly accuse others of lying. But our study shows that AI could become an excuse for people to conveniently hide behind, so that they can avoid being held responsible for the consequences of accusations.”

The researchers trained an algorithm on written data from 986 volunteers, each of whom was asked to make a true statement and a false statement about their plans for the weekend.

The algorithm was then able to identify statements as true or false 66% of the time: notably better than typical human judgement, which the researchers say is rarely better than chance.

The researchers then asked 2,040 more volunteers to read written statements and decide whether they were true or false.

Participants were divided into 4 groups: a control group, which received no AI assistance; a forced group, which was always told the AI’s judgement on a statement before making its own; a choice group, which could request and receive an AI judgement; and a blocked group, which could request a judgement but would not receive one.

In the forced group, participants were significantly more likely to accuse a statement of being false than those in the control and blocked groups. When the AI said a statement was true, just 13% of these participants called it false; when the AI said it was false, 40% did.

Just one third of participants in the choice group requested an AI judgement.

“It might be because of this very robust effect we’ve seen in various studies that people are overconfident in their lie detection abilities, even though humans are really bad at it,” says Köbis.

But those who did request a judgement were very likely to trust it: when the AI said a statement was false, 84% of participants in that group agreed with it.

“It shows that once people have such an algorithm on hand, they would rely on it and maybe change their behaviours. If the algorithm calls something a lie, people are willing to jump on that. This is quite alarming, and it shows we should be really careful with this technology,” says Köbis.

In their paper, the researchers say there is an “urgent need for a comprehensive policy framework” to manage lie-detecting AIs, citing concerns about legal liability, public trust, and the consequences of false accusations.

“There’s such a big hype around AI, and many people believe these algorithms are really, really potent and even objective. I’m really worried that this would make people over-rely on it, even when it doesn’t work that well,” says Köbis.
