Australia prompts humans to share their views on AI ethics and safety

Despite warnings of the societal-scale risks of artificial intelligence (AI), humans remain in charge of their own affairs – for now at least – and in Australia the government wants to know what people think about AI.

Consultation on Safe and responsible AI in Australia closes on Friday 4 August. Interested citizens can submit their views via an online survey or a written submission.

Along with covering AI developments in science and research, Cosmos has been reporting on the social, ethical and environmental risks of generative AI technologies.

As one AI ethicist put it, “every day, there seems to be headlines about some new thing that raises interesting ethical questions”. 

A lot has happened in the realm of AI ethics. Writers and actors are striking in the USA. Creatives are taking technology companies to court over copyright. And governments, educators and companies are blocking AI chatbots over privacy, data and safety concerns.

To recap the main points, Cosmos has put together a human-generated overview of AI issues.

Deepfakes, chatbots, machine learning, hallucinations … feeling the need to reboot?

Confused already? A good place to start might be the Cosmos Explainer: Unethical AI and what can be done about it. In this article, leading AI ethicists and researchers explain how and where problems arise in the design of AI systems – across their data, algorithms and applications – and explain the potential risks to privacy, fairness, accountability and transparency.

Alternatively, listen to Cosmos science journalist Petra Stock talk through the issues with host Dr Sophie Calabretto on The Science Briefing episode Artificial intelligence: who is responsible when AI goes wrong?

Read: Explainer: Unethical AI and what can be done about it.

Listen: Artificial intelligence: who is responsible when AI goes wrong?

AI prompts and privacy

As Rachel Dixon, Victoria’s deputy privacy commissioner, explains, government agencies shouldn’t be using generative AI in their work due to privacy, accuracy and credibility risks.

Professor Jeannie Paterson, co-director of the Centre for AI and Digital Ethics at the University of Melbourne, agrees the training data used in AI large language models poses privacy risks “because of what might be spewed out”.

Privacy is one of the issues flagged in the Australian Government’s consultation paper. Paterson says it is an issue the government may need to weigh in on, given existing rights and legislation.

“We do need to sort of come up with a view about how much we value personal privacy and what we think is an appropriate trade-off between the development of technology and the importance of personal privacy,” she says.

Read: Using AI large language models for government work poses privacy risks, says Victorian deputy privacy commissioner

Read: Australian Government responds to privacy commissioner’s AI warning

If AI lives online, how can it harm humans and animals?

Australia’s eSafety commissioner says she has already received a number of complaints about non-consensual distribution of deepfake intimate images, and expects this type of abuse to grow in volume as AI technology becomes more accessible.

“Looking ahead, I’m concerned AI-related harms may morph and combine with those we’re also starting to see in the metaverse, especially harms affecting children,” Commissioner Julie Inman Grant says.

Read: Generative AI could automate sexual abuse and child grooming, eSafety Commissioner says

Harms to non-human animals tend to be neglected in discussions around AI and ethics, says University of Melbourne philosopher Dr Simon Coghlan, who researches animal ethics and technology. A paper co-authored by Coghlan and Melbourne Law School Professor Christine Parker provides a framework outlining the ways AI might harm animals, stepping through intentional and unintentional harms and foregone benefits.

Read: Researchers argue AI ethics should include animal welfare

Threat of extinction? What are the climate costs and opportunities of AI?

Machine learning and generative AI technologies also pose wider social and environmental risks, with some of these issues raised in the Chief Scientist’s Rapid Response Information Report on Generative AI.

Professor Simon Lucey, director of the Australian Institute for Machine Learning at the University of Adelaide, explains that the environmental impact of machine learning is a “double-sided coin”. On the one hand, training and using generative AI is energy-hungry and emissions-intensive. On the other, the technology can be deployed to help tackle climate change.

Read: Massive computers chewing up gigawatts of energy to support AI

Read: Addressing the massive climate and energy costs of AI

