Google’s AI search engine under fire for misinformation

Barack Obama was the first Muslim president of the US. There are no countries in Africa which start with the letter ‘K.’ Pythons are mammals.

These are some examples of the factually incorrect information that social media users, computer scientists and journalists have surfaced while using Google’s new “AI Overview” tool. The new search engine feature, built on a large language model (LLM), is intended to give an artificial intelligence-prepared summary in response to the search terms entered by a user.

The gaffes have prompted Google to take “swift action” to improve the AI summaries.

AI Overview uses generative AI, which is trained on large amounts of data – usually text – to produce new media, including text, images, audio and video. Generative AI has made waves in recent years with the development of ChatGPT and image-creating software.

It works on a simple premise: sift through large volumes of data to predict the most likely extension of that data, thereby creating new pieces of text or images.
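
While the details of Google’s system are proprietary, the idea of predicting the most likely continuation can be illustrated with a toy sketch. The Python snippet below is a deliberately oversimplified bigram model – the corpus and function names are invented for illustration, and a real LLM uses a neural network with billions of parameters rather than word counts – that “generates” text by picking the statistically most common follower of a word in its training data:

    from collections import Counter, defaultdict

    # Tiny corpus standing in for the large amounts of text a real LLM trains on.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which (a bigram model, far simpler than an LLM).
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def most_likely_next(word):
        """Return the statistically most likely continuation of `word`."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(most_likely_next("the"))  # -> "cat", the most frequent follower in the data

Crude as it is, the sketch hints at why such systems can go wrong: the model reproduces whatever patterns dominate its training data, with no built-in notion of whether those patterns are true.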

Some users have raised ethical concerns about these developments, especially given that the technology is still relatively new and there is little regulation of its use.

A common weakness of generative AI remains that the algorithms suffer from “hallucinations.”

Hallucinations occur when the AI cannot work out how to fill a gap in its data, so it occasionally inserts incorrect information. This is often because the algorithm does not scrutinise the origin of its data, sometimes drawing on unverified social media posts or joke articles such as those published by the satirical online magazine The Onion.

It is not clear exactly what is causing Google’s AI Overview to present misinformation, but the errors will likely push back the wider rollout of the new tool.
