When it comes to the risks and rewards of AI, are we focused on the wrong questions?
Following the government's interim report on 'responsible AI', released yesterday, the director of the Australian Institute for Machine Learning (AIML) says we should be focusing on the concentration of power in the industry, rather than on the AI itself.
“The government is in a very difficult position, trying to protect against these things that haven’t happened yet,” Anton van den Hengel, professor in machine learning and director of AIML, tells Cosmos.
“The technology itself isn’t really the risk, it’s the applications that the technology is put to.”
The interim report is a response to a consultation looking at ‘safe and responsible AI in Australia’.
Suggestions in the report include labelling or 'watermarking' AI-generated content, and auditing AI systems.
“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” said Minister for Industry and Science Ed Husic.
“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.
“The Albanese government moved quickly to consult with the public and industry on how to do this, so we start building the trust and transparency in AI that Australians expect.”
But van den Hengel says that the real issue is the small number of companies – mostly based in the US – controlling the industry.
“Seven of the 10 biggest companies are AI companies, and their GDP is larger than the majority of nations,” he told Cosmos.
“Those companies are all focused in America and China, they’re ‘post tax’ and […] you could draw the conclusion that they’re post democracy as well.
“I think it’s entirely reasonable that people are worried. They get a vote for their government, but they don’t get a vote for who runs the corporations that control their lives.”
Instead, van den Hengel suggests that governments need to invest more in building AI capability in Australia, so we can better direct where we want AI to be used.
“How are we going to use this technology to save Aboriginal languages, or to help address the housing crisis, or to make the health system more efficient?” he says.
“We can’t just leave it to a bunch of American and Chinese companies to decide where this critical technology will be applied.”