The Australian Institute which is trying to tame the AI beast

“Modern AI is amazing,” says Simon Lucey. “It’s something to behold. Nobody thought they could scale to the heights they are at. But – at the end of the day – they’re still glorified ‘if-this-then-that’ machines.”

It’s an honest appraisal from the director of the Australian Institute for Machine Learning (AIML) and professor in the University of Adelaide’s School of Computer and Mathematical Sciences, who lists his interests as computer vision, machine learning, and robotics.

“It’s a technology based on brute force,” he told Cosmos.

AIML is Australia’s first institute dedicated to research in machine learning. It formed in early 2018 from the Australian Centre for Visual Technologies (ACVT), with funding from the South Australian state government and the University of Adelaide.

Simon Lucey. Credit: AIML

It has contributed hundreds of millions of dollars to the University’s research income, and helped it rise to second place for computer vision research in CSRankings, an international league table of computer science institutions.

Its interests are wide, but its three key pillars are machine learning, artificial intelligence and computer vision.

AIML has around 200 “members,” including leading academics, postdoctoral researchers, postgraduate students and scholarship holders, as well as a full-stack engineering team and a small team of professional staff.

AIML is one of the largest machine learning research groups in Australia, and claims to be “one of the best in the world for computer vision.”

It has partnered with Microsoft and worked with Amazon, but South Australian Government funding allowed it to set up its own engineering team, giving local businesses access to ‘free’ engineering hours to build high-tech industrial software solutions.

One of its bigger clients was Rising Sun Pictures. AIML built AI tools used on films such as Elvis and a number of Marvel titles.

The interior of AIML. Credit: AIML

University of Adelaide research students and AIML engineers used their spare time to compete in the 2022 Learn-to-Race Autonomous Racing Virtual Challenge, beating more than 400 international competitors to secure first- and second-place finishes in their categories.

A milestone in autonomous vehicles will be reached when AI enables a vehicle to understand its environment.

The AI systems that have taken the world by storm in recent years, such as ChatGPT, Google Bard and Alexa, are called large language model (LLM) systems.

Presently the focus seems to be on having an LLM absorb an entire internet’s worth of knowledge and package everything into an algorithm – and on the enormous computing power needed to do it.

LLMs can write an essay on Shakespeare’s “A Midsummer Night’s Dream” because they’ve read everything there is to read about the subject – and can mix and match those details into yet another version. They can even mimic the different writing styles they’ve seen if requested.

LLMs can also translate between very different languages. That’s because they can sift through an internet-scale list of examples and average out what’s most commonly applied to similar phrases.
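This example-averaging idea can be caricatured in a few lines of Python. The “corpus” of phrase pairs and the lookup logic below are invented for illustration – real systems learn statistical patterns over tokens, not literal phrase tables – but the sketch shows why such a translator works only for phrases resembling its examples.

```python
# A toy "translator" that picks whichever target phrase most often
# accompanies a source phrase in its (invented) training examples.
from collections import Counter

# Invented parallel corpus: (source phrase, target phrase) pairs.
examples = [
    ("bonjour", "hello"), ("bonjour", "hello"), ("bonjour", "good day"),
    ("merci", "thanks"), ("merci", "thank you"), ("merci", "thanks"),
]

def translate(phrase):
    # Sift through every example and average out (here: majority-vote)
    # the most common pairing.
    counts = Counter(tgt for src, tgt in examples if src == phrase)
    if not counts:
        return None  # never seen: no underlying rule to fall back on
    return counts.most_common(1)[0][0]

print(translate("bonjour"))    # -> "hello"
print(translate("au revoir"))  # -> None: fails on anything unseen
```

The failure case in the last line is the point: without examples to average over, there is nothing behind the curtain.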

They can access your entire internet history, interpolate your preferences and habits – and tailor search, social media and advertising feeds accordingly.

But language translators still produce hilarious – and embarrassing – faux pas. The AI doesn’t understand context or nuance.

And AI content creators frequently get duped by false information or make incorrect connections.

“There’s no comprehension,” says Lucey. “There’s no reasoning.”

AI is taught in very different ways to human children. And that may be part of the problem.

“We don’t learn to read by going through trillions of pages of text, but that’s how we’re teaching AI to read at the moment.

“Similarly, we don’t learn to recognise what we see by going through billions of images.”

And while memorising the internet gives powerful systems such as ChatGPT immense pattern recognition, it hasn’t produced perception.

“What’s really amazing is that even though ChatGPT 4 can do all these amazing things – it can’t multiply!”

“That’s because it learns by rote – prodigious memorisation.

“But it doesn’t understand what it’s seeing. It can’t figure out the rules behind them.”
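Lucey’s multiplication point can be sketched as a contrast between recall and rule. Everything below is invented for illustration: a “rote learner” that has only memorised worked examples answers perfectly inside them, but has no rule to apply one step outside.

```python
# Rote learner: memorises every multiplication it has "seen"
# (here, all products of digits 0-9).
memorised = {(a, b): a * b for a in range(10) for b in range(10)}

def rote_multiply(a, b):
    # Pure recall -- there is no arithmetic rule behind it.
    return memorised.get((a, b))  # None for anything unseen

def rule_multiply(a, b):
    # Knowing the rule generalises to numbers never seen before.
    return a * b

print(rote_multiply(3, 7))    # -> 21: inside the memorised examples
print(rote_multiply(12, 12))  # -> None: just outside them, recall fails
print(rule_multiply(12, 12))  # -> 144: the rule still works
```

The gap between the last two lines is the gap Lucey is describing: prodigious memorisation without the rules behind it.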

But rote learning is precisely what much AI research is counting on.

“So there’s this thesis – I think a lot of companies are banking on it – if I just get enough data and enough compute, something called ‘emergence’ will just occur. That somehow these machines will get that much more intelligent.

“Now, there’s a problem with that thesis. First off, it’s extremely inefficient. It’s extremely costly. You also need to collect huge amounts of data.

“That’s pretty much limited to national superpowers and large multinational corporations. And they’re in some ways coming up against the limits of that process already.”

At its core, machine intelligence is a set of step-by-step instructions. If this, then that.

“People worked out decades ago there are plenty of intelligent tasks that can be programmed – bake a cake, for instance,” says Lucey.
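The cake-baking example can be written out literally as if-this-then-that instructions. The steps and thresholds below are invented, but they show the kind of programmable “intelligent task” Lucey means.

```python
# "Bake a cake" as explicit if-this-then-that instructions.
# Temperatures and times are invented for illustration.
def bake_cake(oven_temp_c, minutes_in_oven):
    if oven_temp_c < 180:
        return "preheat the oven"   # if this condition, then that action
    if minutes_in_oven < 35:
        return "keep baking"
    return "take the cake out"

print(bake_cake(20, 0))    # -> "preheat the oven"
print(bake_cake(180, 20))  # -> "keep baking"
print(bake_cake(180, 40))  # -> "take the cake out"
```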

Even though ChatGPT 4 can do all these amazing things – it can’t multiply!

Simon Lucey

The trick behind LLM AI is that it attempts to memorise every recipe for everything.

It bundles every example it has ever seen into an algorithm. And the more it sees, the more variation that algorithm can encompass.

“That’s what we see with ChatGPT at the moment,” Lucey explains. “With ChatGPT 2 versus 3, the algorithm itself is basically the same. The only thing that has changed is the amount of data and computer power used to expand it.”

Big data. Big compute. Big dollars.

“Only a couple of companies in the world can afford to create these large models, like OpenAI and DeepMind. And it’s only getting more difficult.”

But despite this brute force, big data approach, LLM algorithms are yet to produce a trustworthy, fully autonomous car.

“When you look at humans, there are a lot of times where we rote memorise. But there are also a lot of things we are somehow able to generalise. We don’t need trillions of hours behind the wheel of a car to drive [it].

“We definitely make mistakes. But we know if a kid jumps out onto the road, we must stop!”

“When AI sees something it has never seen before, it can fail miserably,” Lucey explains.

“And, in many ways, that is the big divide between human and machine intelligence.”

Lucey says new approaches are needed to produce different kinds of intelligence tailored to learn in specialist environments. And a fringe benefit of this is bypassing the need for expensive “big data, big compute” techniques.

That’s what AIML aims to capitalise on.

Space exploration is one example.

There’s no internet-scale trove of raw material about living and operating on the Moon from which to distil trends and averages into an algorithm. So any rover will need to adapt and learn fast from its own experiences, and those of the rovers around it, without access to a supercomputer.

To do that, it needs a new skill: the ability to reason.

What machines need, he says, is “big picture” judgement.

Here, many small pieces of information can trigger different neural pathways to produce a coherent – if not entirely complete – picture. It forms an expectation out of the available facts.

A rationalisation.

“And that’s where we make our broader thinking decisions,” he adds.

New deep learning techniques aim to emulate such neural networks.

“This is our side door to AI,” says Lucey. “We’re not just saying we can’t compete with the big guys. AI needs to reason – not just because it’s cheaper and easier for Australia to do so. It’s because it’s the only way we’ll solve some of the tough problems we will encounter in the 21st century.”
