Government’s new “AI guardrails” draw mixed response from experts

Minister for Industry and Science Ed Husic, in parliamentary question time on 14 May 2024. Credit: Tracey Nearmy / Stringer / Getty Images

The Australian Government has proposed 10 mandatory “guardrails” for high-risk AI development and use, with plans to regulate them in the works.

The government, which appointed an expert group to develop the guardrails, is now taking submissions on how they might be regulated.

The 10 guardrails include establishing and publishing accountability processes, appropriate governance for data quality and provenance, and enabling human control or intervention in AI systems.

The government says they have been issued as a voluntary safety standard for businesses to adopt, ahead of making them mandatory through existing regulatory frameworks, new framework legislation amending existing laws, or a new law specific to AI.

“This is a great step towards managing the risks of technology,” says Associate Professor Niusha Shafiabady, from Charles Darwin University’s faculty of science and technology.

Shafiabady says the guidelines contain “very good points”, such as the guardrail enabling human control and intervention.

“This would add a new layer to check the AIs’ outcomes before finalising the decisions that could potentially impact people,” she says.

Dr Tapani Rinta-Kahila, from the University of Queensland’s business school, agrees.

“With Robodebt (technically not AI but comparable), and in a similar case in the Netherlands (this was AI), we saw an absence of processes for people impacted by AI systems to challenge the system’s outcomes,” says Rinta-Kahila.

“In both countries, this led to the formation of grassroots social movements that set out to help the affected people. The guardrails speak to all these issues very specifically and are thus a step in the right direction.”

But Shafiabady has concerns about some of the other guardrails, which she says will be difficult to implement – for example, the guardrail emphasising transparency of data, models and systems.

“Companies are protective of their data, algorithms and methods,” she says.

Dr Erica Mealy, a lecturer in computer science at the University of the Sunshine Coast, believes the guardrails’ emphasis on transparency and privacy is “problematic”.

“There’s no such thing as ‘explainable AI’,” says Mealy.

“Most, if not all, of the major AI players are international technology companies with no interest in keeping Australia’s data and intellectual property sovereign, and there is no way to make the training sets and decision algorithms both private and transparent or accountable to the Australian public.

“Transparency and accountability need visibility, while privacy needs confidentiality – these are competing interests at best.”

The government’s move follows artificial intelligence regulations proposed and adopted in other jurisdictions over the past year, including by the United Nations and the European Union, and in several US states.

While multiple experts have called the guidelines a step in the right direction, Dr Shaanan Cohney says they “do not add anything new”.

“The proposed guidelines are very high-level and as such are likely to create a compliance-driven culture rather than meaningfully improving practices,” says Cohney, a senior lecturer in the University of Melbourne’s faculty of engineering and information technology.

“Australia should be more careful when following the EU’s lead – their risk-based approach to regulation has yet to improve safety while imposing substantial extra costs.

“Regulating AI is necessary. However, our regulators would do well to act as intelligently as the products they are seeking to regulate.”

Professor Toby Walsh, chief scientist of the AI Institute at the University of New South Wales and an adjunct fellow at CSIRO’s Data61, welcomes the government’s proposal but warns Australia may miss out on opportunities without proper investment.

“The public is rightly concerned about how artificial intelligence is being deployed. It is a powerful technology with both positive and negative uses,” says Walsh.

“Compared to other nations of a similar size like Canada, we are not making the scale of investment in the fundamental science.

“Another concern is the speed with which this regulation is being developed. Good regulation takes time.”

Submissions from the public on the new guardrails will close on 4 October.
