The chief of the Australian Defence Force has argued that democracies will be vulnerable to “truth decay” as AI and deepfakes amplify misinformation.
In a speech at the Australian Strategic Policy Institute conference on Thursday, General Angus Campbell said disinformation could “fracture and fragment entire societies”.
“Healthy and functioning societies such as ours depend upon a well-informed and engaged citizenry,” he told attendees.
“Unfortunately, it is often said, we are increasingly living in a post-truth world where perceptions and emotions often trump facts.”
He also warned that as AI systems become more capable, they may be deployed by Australia’s adversaries to misinform the public.
“As these technologies quickly mature, there may soon come a time when it is impossible for the average person to distinguish fact from fiction, and although a tech counter response can be anticipated, the first impression is often the most powerful,” Campbell said.
“This tech future may accelerate truth decay, greatly challenging the quality of what we call public ‘common sense’, seriously damaging public confidence in elected officials and undermining the trust that binds us.”
In June, the Albanese government said it was considering a ban on ‘high-risk’ uses of AI, particularly because of the risk of deepfakes and algorithmic bias. The EU passed its first AI regulations earlier this year.
Experts are also worried.
“[Weaponised AI] currently allows for fake information to be spread, cloned images and false ‘truth’ to be propagated and can be used to support the development of cyber or kinetic warfare if left unregulated,” says SmartSat Professorial Chair in Cyber Security Professor Jill Slay AM.
“This can happen by deliberate misuse of the algorithms (or mathematical approaches) that underpin developed information or technology. However, controlled use of these techniques supports the development of useful technology and is beneficial to the Australian economy.”
Beyond broader questions about generative AI’s place in society, it remains unclear whether governments should – or even can – regulate these technologies.
“It stands to reason that any form of artificial intelligence (AI) deemed unsafe should not be used or deployed. The challenge, however, lies in defining and deciding what constitutes ‘unsafe’ AI,” says Professor Mary-Anne Williams from the University of New South Wales.
“The question of responsibility and regulatory oversight remains largely unanswered, with ambiguities persisting within scientific, engineering, educational, and legal spheres.”