The world needs responsible AI

By Olivia Henry, the Australian Science Media Centre

Artificial intelligence (AI) is changing the way we live.

From your ‘up next’ playlist, to chatbots and even garbage trucks, AI is already in our homes and on our streets.

A report this week by the Australian Academy of Technological Sciences and Engineering (ATSE) and the Australian Institute for Machine Learning (AIML) says it’s make or break time to embrace a responsible AI future.
Speaking to the AusSMC earlier this week, AI experts presented the report findings, which include a compilation of never-before-published insights from 13 Australian AI leaders.
“Just as the steam engine fundamentally changed the way that people lived and worked, AI is the steam engine of today, if you like,” says ATSE CEO, Kylie Walker.
Australia not only has the expertise, the industry, and the stability to lead AI development, according to Walker, but also the governance in place to make sure it’s done responsibly and inclusively.  
But what does this mean? According to the report, there is a growing awareness that AI systems can carry the biases of their creators, as well as the data used to train them.
This idea is echoed in recent research.

One study found AI image generators depicted 98% of surgeons as white and male, while another found AI-generated content consistently portrayed men as strong, competent leaders and women as emotional and ineffective.
As AI is increasingly used to support everyday processes from automated employment to medical care, responsible AI development should help to address our most pressing social challenges such as inequality, rather than add to them, says Professor Shazia Sadiq FTSE from the University of Queensland.
Concerns around consent and ownership of data compound the problem.
“Many current AI systems are trained using data from publicly available sources such as Wikipedia,” Sadiq says.
“This data is often collected without the explicit consent of the people who created the content. Creative industries in particular have a lot at stake here.”

But Director of the National AI Centre, Stela Solar, says responsible AI could help navigate these problems.
“Often AI has been placed in a binary or polarised discussion. AI is not a ‘yes/no’ question; AI is a ‘how’ question,” she says.
“To me, responsible AI is about the method of deploying, designing and developing AI systems to mitigate unintended consequences while creating value.”
Director of the Australian Institute for Machine Learning, Professor Simon Lucey, says the need for responsible AI is exciting, because it means AI has reached a point of maturity where the technology now needs guardrails to match society’s values and ethics.
“It’s really coming up! We’re seeing it in products, we’re seeing things like ChatGPT, we’re seeing autonomous vehicles, we’re seeing robots, we’re seeing new types of antibiotics that are being developed.”
Lucey adds that Australia has a huge talent pool for AI, which could diversify the economy and hugely benefit its industries.
“There’s a real opportunity for the Australian Government to kind of lean into… actually having a big coherent strategy around ‘how do we get world class, responsible AI?’ We have all the pieces. But how can we put all these bits together? I think that’s another really exciting opportunity that lays before us,” he says.

Watch the full AusSMC News Briefing: Australia’s AI moment has arrived and it’s make or break time.

This article originally appeared in Science Deadline, a weekly newsletter from the AusSMC. You are free to republish this story, in full, with appropriate credit. 
