People can learn to detect AI writing

Another week, another AI chatbot.

This week Snapchat launched My AI, a customised version of OpenAI's ChatGPT, and Elon Musk signalled his intention to build one.

Artificial intelligence (AI) writing technologies underpinned by large language models are certainly impressive. And they are creating a great deal of anxiety among writers, academics and people concerned about intellectual property rights.

So detecting AI text is important, and apparently it's not that hard.

A team of researchers at the University of Pennsylvania has demonstrated humans can learn to detect AI-generated text in a peer-reviewed paper at the February 2023 meeting of the Association for the Advancement of Artificial Intelligence.

"AI today is surprisingly good at producing very fluent, very grammatical text," says study co-author Liam Dugan. "But it does make mistakes. We prove that machines make distinctive types of errors – common-sense errors, relevance errors, reasoning errors and logical errors, for example – that we can learn how to spot."

The study uses data collected through Real or Fake Text?, an original web-based training game.

The game begins with a sample of text written by a human. It then progressively adds text, one sentence or paragraph at a time, asking users to identify the point at which the machine takes over and to give reasons for their choice.

The reasons people gave for guessing the author was a machine differed depending on the writing genre. Common sense was more likely to apply in recipes than news articles. Irrelevant material was more likely in short stories than speeches.

If the player selects "machine-generated", the game round ends and the true author – machine or human – is revealed.

The study results show that participants scored significantly better than random chance, providing evidence that AI-created text is, to some extent, detectable.

The study showed high variability in the skills of individual players.

Certain genres of writing were easier to detect than others. For example, players spotted AI-generated recipes more readily than stories or news articles. The study says that's because contradictions were easier to spot, and recipes often assume implied knowledge, something language models struggle to get right.

"Our method not only gamifies the task, making it more engaging, it also provides a more realistic context for training," says Dugan. "Generated texts, like those produced by ChatGPT, begin with human-provided prompts."

The game teaches players the kinds of errors that characterise AI chatbots. You can try it yourself here.