A leading researcher says the pace of development in artificial intelligence should be a “wake-up call” for Australian governments and media organisations.
Professor Simon Lucey, director of the Australian Institute for Machine Learning (AIML) at the University of Adelaide, says generative AI images and video are already so advanced that it is becoming increasingly difficult to tell real content from fake.
Lucey says he is pleased with the language around responsible AI from the Federal Government, and the establishment of a new AI expert group, but wants to see Australia develop its own AI capabilities to remain competitive globally.
“The time is now to act … we have this amazing country, we have these amazing Australian values. And if we don’t build up sovereign capability in this area, we are not going to be able to reflect those values back to the new generation of Australians that are coming through,” he says.
The difficulty in distinguishing AI-generated from human-generated content poses a real risk for Australian newsrooms already struggling to keep up with AI developments, says Professor Monica Attard, co-director of the Centre for Media Transition at the University of Technology Sydney.
Attard interviewed 20 editorial and production staff at eight Australian newsrooms, including the ABC, the Nine newspapers, The Guardian and others. Those leaders conveyed “extreme concern coupled with extreme excitement” about AI developments.
“They’re all thinking very, very deeply about how generative AI will challenge many of journalism’s fundamentals. They talked about massive upheaval, which of course journalism is very accustomed to … technology has done enormous good and enormous damage to the business.”
She says while AI brings some opportunities, there are significant risks for media organisations, including trust, copyright and the economic sustainability of journalism business models.
Newsrooms are cautious due to concerns about the integrity of information and don’t want reporters using ChatGPT, Attard says.
She says editors “all talked about the fact that integrity was important to what they did, and that they were looking at ways to safeguard and retain the trust of the audience.”
AI ethics researcher Rebecca Johnson, based at the University of Sydney, says it’s important to remember that AI, like other technologies, is an “extension of people”.
“So our training data, the way we develop them, the way we design them, the way we deploy them, these are all decisions and reflections of people,” Johnson says.
That means AI systems are imbued with the biases and perspectives of the people and companies developing them.
She says that among the dominant voices in the field – Sam Altman, Microsoft, Elon Musk – “you’ve got a lot of centralised money and power”.
“And so when people talk about value alignment for these technologies, you’ve got to ask, well, whose values you’re trying to align to?”
She says “the general public is starting to get a better awareness of how what goes into these machines influences what comes out. But there’s actually a lot of other avenues that we get bias into these systems. And that can be through the way that the system is designed and its architecture, the goals that we give it … even the way that you prompt it.”
Lucey, Attard and Johnson were speaking with Will Berryman, executive director of the Royal Institution of Australia, on a media briefing panel titled ‘Deepfakes are here to stay – so where to next?’
Cosmos is published by the Royal Institution of Australia (RiAus). RiAus has signed a Memorandum of Understanding to provide more than 12,500 Australian science articles to AIML for use in developing a sovereign AI capability or tool.
For in-depth coverage of AI issues, read Cosmos reports on ethics, privacy and energy use.