Australian Government responds to privacy commissioner’s AI warning

The Australian Attorney-General’s Department is considering the privacy risks associated with artificial intelligence tools like ChatGPT as part of its review of the nation’s Privacy Act.

The department says the government is committed to ensuring Australia has fit-for-purpose regulatory settings to address the challenges posed by AI.

Last week, Cosmos reported concerns from Victoria’s deputy privacy commissioner about risks associated with governments using large language models for emails, letters or reports if those documents contain any personal information.

“At the moment it’s not appropriate for governments to use these tools in normal government work,” said Rachel Dixon, Privacy and Data Protection Deputy Commissioner at the Office of the Victorian Information Commissioner.

Dixon also flagged particular concerns about companies like Microsoft and Google embedding AI tools into their enterprise software products, which are widely used by governments.

Responding to Cosmos questions about AI privacy risks, the Attorney-General’s Department says: “The Review of the Privacy Act 1988 considered the privacy risks associated with the use of new technologies, including practices such as web scraping, and made proposals to provide greater transparency and give individuals more control over their data.”

The department says it is engaging with technology providers and considering public feedback in relation to the review of the Act. 

A spokesperson for Victoria’s Department of Government Services says: “We regularly review AI technology, such as ChatGPT, to consider the ethical issues and necessary safeguards required for its potential use within government in the future.”

The department’s spokesperson says a range of existing policies guiding public servants are relevant to generative AI.

The Victorian Government outlines its approach to public-sector use of digital technologies, including generative AI, in its Digital Strategy 2021–26.

Among its stated aims, the strategy intends to “streamline manual effort and repetition through the use of AI or robotic process automation”. It also seeks to develop ethical and safe guardrails for the use of personal data, AI and other emerging technology.
