AI deepfakes have us questioning everything we see – can science and journalism help?

By Rachel McDonald, the Australian Science Media Centre

When social media platforms were recently flooded with deepfakes of Taylor Swift, it seemed like AI had hit the big time as the anti-hero of misinformation. Altered images have been part of the media landscape for decades, but the recent rise of artificial intelligence means we can now create faces that can appear more real than genuine human faces.

Professor Simon Lucey, Director of the Australian Institute for Machine Learning (AIML), says that while fake images are nothing new, deep learning techniques have driven an explosion in the scale and ease with which they can be produced.

“The ease and the magnitude of the technology in terms of being able to manipulate things has just exploded,” he said.

Will Berryman, Executive Director of the Royal Institution of Australia (RiAus), says it is becoming increasingly difficult to distinguish between real and fake images, and that even the smallest alterations are being used to manipulate the way information is received. The RiAus publishes the science magazine COSMOS and daily online science news, and has been grappling with what AI will do to journalism and truth.

“When it can be done maliciously, when it can be done at scale, I think that’s very troubling for law enforcement, very troubling for the way ordinary people form their opinions and act, it could be another supercharged avenue of misinformation that overwhelms us,” he said.

Lucey says organisations including AIML are working on potential ways to ‘watermark’ images – cryptographically embedding information about an image’s source to help people verify it further down the line – but technical solutions like that can only go so far when people are actively seeking to deceive.

“It’s useful for the good actors – the people who want to be responsible,” he said.

“And so there’s always going to be this, unfortunately, fake content or deep fake content that seeps through.”
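To make the watermarking idea concrete, here is a minimal sketch of one way a provenance signature can work. This is illustrative only, not AIML’s actual scheme, and production systems typically embed the signed record inside the image file’s metadata rather than carrying it separately. It assumes Python’s third-party cryptography package; the file photo.jpg and the creator name are hypothetical. The idea: the creator hashes the image and signs the hash with a private key, and anyone holding the matching public key can later check that the image hasn’t changed since it was signed.

```python
import base64
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_image(image_bytes: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    # Bind the image's hash and the claimed creator together, then sign both.
    record = {"creator": creator, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = base64.b64encode(key.sign(payload)).decode()
    return record


def verify_image(image_bytes: bytes, record: dict, pub: Ed25519PublicKey) -> bool:
    # Any pixel-level change breaks the hash; any tampering with the
    # record itself breaks the signature.
    if hashlib.sha256(image_bytes).hexdigest() != record["sha256"]:
        return False
    payload = json.dumps(
        {"creator": record["creator"], "sha256": record["sha256"]}, sort_keys=True
    ).encode()
    try:
        pub.verify(base64.b64decode(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    image = open("photo.jpg", "rb").read()  # hypothetical image file
    record = sign_image(image, "Example News Desk", key)
    print(verify_image(image, record, key.public_key()))         # True
    print(verify_image(image + b"x", record, key.public_key()))  # False: altered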

Berryman says that as the public grapples with new challenges in distinguishing reality from fiction, journalists have the potential to help by prioritising their role as curators of verified facts.

“Over 30 years, curated media has fallen away, in place of information everywhere, and information at our fingertips. And I think we threw curation away too quickly, as a societally important thing,” he said.

Professor Monica Attard from the University of Technology Sydney recently took part in a project interviewing 20 editorial and production staff from a range of major Australian newsrooms about how AI is influencing the news industry.

She says the feedback was that newsrooms are struggling to keep up with the pace of technological advancement, but many editors also believe journalists can play an important role in curbing the spread of misinformation.

“They saw an opportunity for the news media organisations to produce quality news, and overturn or impact the trust deficit that we’ve seen creep into journalism in the last 10-15 years,” Attard says.

On the broader topic of AI, she says that while there is anxiety, especially among younger journalists, about potential job losses, many of the editors interviewed hope AI could one day take over routine tasks such as weather reports or headline generation, freeing journalists to focus on work where a human touch matters more. For now, however, she says it is hard to rely on even the most basic information generated by AI.

“It complicates the process of verification in journalism 100-fold. How can you verify when you’ve got bits and pieces of information coming from various different unnamed sources?” she said.

Attard says that for now, newsrooms are generally avoiding the use of AI, believing the tools to use it safely are not yet available.

“The wisest advice that we heard from the editors was until we have those guardrails in place, which can only come about through conversations with the manufacturers and the platforms that licensed the technology, until we’ve got those guardrails in place, that it’s just not a trustworthy enough technology for the purposes of journalism,” she said.

