In Adelaide they’re trying to build a deep learning machine that can reason

Large Language Models burst onto the scene a little over a year ago and transformed everything. Yet the field is already facing a fork in the road: more of the same, or a venture into what is being called “deep learning”?

Professor Simon Lucey, the Director of the Adelaide-based Australian Institute for Machine Learning, believes that path will lead to “augmented reasoning”.

It’s a new and emerging field of AI that combines the ability of computers to recognise patterns through traditional machine learning with the ability to reason and learn from prior information and human interaction.

Machines are great at sorting. Machines are great at deciding. They’re just bad at putting the two together.

Part of the problem lies in teaching a machine something we don’t fully understand ourselves: intelligence.

What is it?

Is it a vast library of knowledge?

Is it extracting clues and patterns from the clutter?

Is it “common sense” or cold, hard rationality?


The Australian Institute for Machine Learning’s Professor Simon Lucey says it’s all these things – and much more. And that’s why artificial intelligence (AI) desperately needs the ability to reason out what best applies where, when, why and how.

“Some people regard modern machine learning as glorified lookup tables, right? It’s essentially a process of ‘if I’ve got this, then – that’.”

“The amazing thing,” Lucey adds, “is that raw processing power and big-data deep learning have managed to scale up to the level needed to mimic some types of intelligent behaviour.

“It’s proven this can actually work for a lot of problems, and work really well.”

But not all problems.

“We’re seeing the emergence of a huge amount of low-risk AI and computer vision,” Lucey says. “But high-risk AI – say looking for rare cancers, driving on a city street, flying a combat drone – isn’t yet up to scratch.”

Existing big-data and big-compute techniques rely on finding the closest related example. But gaps in those examples set a trap.

“There’s all these scenarios where we are coming up against issues where rote memorisation doesn’t equate to reasoning,” Lucey explains.
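
To see what rote memorisation looks like in code, here is a minimal, hypothetical Python sketch (the data and labels are invented for illustration, and this is not AIML code): a nearest-neighbour classifier answers every query by recalling its closest memorised example, so a query far outside anything it has experienced still receives a confident answer.

```python
import numpy as np

# Memorised "experience": a handful of labelled 2-D examples.
# (Invented data, purely for illustration.)
examples = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [5.1, 4.8]])
labels = ["cat", "cat", "dog", "dog"]

def nearest_neighbour(query):
    """Rote memorisation: return the label of the closest stored example."""
    distances = np.linalg.norm(examples - query, axis=1)
    return labels[int(np.argmin(distances))]

print(nearest_neighbour(np.array([1.1, 1.0])))      # "cat" - close to experience
print(nearest_neighbour(np.array([100.0, -50.0])))  # "dog" - yet this query is
                                                    # unlike anything ever seen
```

The trap in the gaps is exactly this: the lookup has no way of noticing that the second query lies far outside everything it has memorised.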

Action. Reaction. Reason.

The human brain has been called an averaging machine. Or an expectation generator.

That’s why we make so many mistakes while generally muddling our way through life.

But that’s a byproduct of the way the networks of neurons in our brains configure themselves into paths based on experience and learning.

This produces mental shortcuts. Expectation biases. And these help balance effectiveness with efficiency in our brains.

“Intelligence isn’t only about getting the right answer,” says Lucey. “It’s getting the right answer in a timely fashion.”

For example, humans are genetically programmed to respond reflexively to the sight of a lion, bear – or spider.

Intelligence isn’t only about getting the right answer. It’s getting the right answer in a timely fashion.

Simon Lucey

“You aren’t going to think and reason,” he explains. “You’re going to react. You’re going to get the hell out of there!”

But evolution can lead to these mental shortcuts working too well.

We can find ourselves jumping at shadows.

“Which is fine, right?” says Lucey. “Because if I make a mistake, it’s okay – I just end up feeling a bit silly. But if I’m right, I’ll stay alive! Act quick, think slow.”

Machine intelligence is very good at doing quick things like detecting a face.

“But it’s that broader reasoning task – realising if you were right or wrong – where there’s still a lot of work that needs to be done.”

Back to the ol’ drawing board

“Biological entities like humans don’t need nearly as much data as AI to learn from,” says Lucey. “They are much more data-efficient learners.”

This is why a new approach is needed for machine learning.

“People decades ago realised that some tasks can be programmed into machines step by step – like when humans bake a cake,” says Lucey. “But there are other tasks that require experience. If I’m going to teach my son how to catch and throw a ball, I’m not going to hand him an instruction book!”

Machines, however, can memorise enormous instruction books. And they can also bundle many sets of experiences into an algorithm. Machine learning enables computers to program themselves by example – instead of relying on direct coding by humans.

How do I produce the rules behind an experience? How can I train AI to cope with the unexpected?

Simon Lucey

But it’s an outcome still limited by rigid programmed thinking.

“These classical ‘if-this-then-that’ rule sets can be very brittle,” says Lucey. “So how do I produce the rules behind an experience? How can I train AI to cope with the unexpected?”
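
As a rough sketch of that brittleness – with a hypothetical task and invented numbers, not anything from AIML – the hand-written rule below hard-codes a single expectation, while the version that learns by example derives its boundary from labelled experience and can simply be re-fitted when the world changes.

```python
import numpy as np

# A classical "if-this-then-that" rule: one fixed, hand-coded expectation.
def rule_based_is_ripe(weight_grams):
    return weight_grams > 150  # breaks the day "ripe" stops meaning "> 150 g"

# Learning by example instead: derive the boundary from labelled experience.
# (Invented training data, purely for illustration.)
weights = np.array([120.0, 130.0, 160.0, 175.0, 180.0])
is_ripe = np.array([False, False, True, True, True])

def fit_threshold(xs, ys):
    """Place the boundary midway between the heaviest unripe
    and the lightest ripe example."""
    return (xs[~ys].max() + xs[ys].min()) / 2

print(fit_threshold(weights, is_ripe))  # 145.0, learned from the examples

# When a new fruit variety arrives and the old rule misfires, the learned
# version is simply re-fitted from fresh examples; nobody rewrites the code.
new_weights = np.array([80.0, 90.0, 110.0, 120.0, 130.0])
new_is_ripe = np.array([False, False, True, True, True])
print(fit_threshold(new_weights, new_is_ripe))  # 100.0, a new learned boundary
```

Even so, neither version copes with the truly unexpected – a case where ripeness stops tracking weight at all – which is the harder problem Lucey is pointing at.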

This needs context.

For example, research has shown babies figure out the concept of “object permanence” – that something still exists when it moves out of sight – between four and seven months of age.

And that helps the baby move on to extrapolating cause and effect.

“With machines, every time the ball moves or bounces in a way not covered by its set of rules – it breaks down,” says Lucey. “But my kid can adapt and learn.”

It’s a problem facing autonomous cars.

Can we push every possible experience of driving through a city into an algorithm to teach it what to expect? Or can it instead learn relevant rules of behaviour, and rationalise which applies when?

‘How to think, not what to think’

Albert Einstein said: “True education is about teaching how to think, not what to think.”

Lucey equates this with the need for reasoning.

“What I’m talking about when it comes to reasoning, I guess, is that we all have these knee-jerk reactions over what should or should not happen. And this feeds up to a higher level of the brain for a decision.

“We don’t know how to do that for machines at the moment.”

The problem with current machine learning is it’s only as good as the experiences it’s been exposed to.

Simon Lucey

It’s about turning experience into knowledge. And being aware of that knowledge.

“The problem with current machine learning is it’s only as good as the experiences it’s been exposed to,” he says. “And we have to keep shoving more and more experiences at it for it to identify something new.”

An autonomous car is very good at its various sub-tasks. It can instantly categorise objects in video feeds. It can calculate distances and trajectories from sensors like LiDAR. And it can match these – extremely quickly – with its bible of programmed experiences.

“It’s working out how to connect these different senses to produce a generalisation beyond the moment that AI still struggles with,” Lucey explains.

The AIML is exploring potential solutions through artificial neural networks – systems inspired by the interconnected webs of neurons in our brains.

In the world of AI, that’s called deep learning.

Building better brains

Neural networks don’t follow a set of rigid “if this, then that” instructions.

Instead, the network weighs what it perceives to guide signals through what is essentially a wiring diagram. Experience wears trails into this diagram. But it also adds potential alternative paths.

“These pieces are all connected but have their own implicit bias,” says Lucey. “They give the machine a suite of solutions, and the ability to prefer one solution over another.”
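
As a loose illustration of that wiring diagram – the weights below are random numbers, not a trained model – a toy neural network routes an input through weighted connections and ends up preferring one of two candidate outputs, rather than following a hard-coded branch. Nudging a weight, as learning would, shifts that preference.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: no if/else rules, just weighted connections.
W1 = rng.normal(size=(4, 3))  # input (3 features) -> 4 hidden units
W2 = rng.normal(size=(2, 4))  # hidden units -> 2 candidate "solutions"

def forward(x):
    hidden = np.maximum(0.0, W1 @ x)              # each unit weighs its inputs
    scores = W2 @ hidden                          # each solution gathers evidence
    return np.exp(scores) / np.exp(scores).sum()  # a preference, not a verdict

x = np.array([0.5, -1.0, 2.0])
print(forward(x))  # e.g. one solution clearly preferred over the other

# "Experience wears trails": strengthening one path shifts the preference.
W2[1] += 1.0
print(forward(x))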

It’s still early days, and we’ve got a lot to learn about deep learning.

“Neural network algorithms are great for quick reflex actions like recognising a face,” he adds. “But it’s the broader reasoning task – like ‘does that reflex fit the context of everything else going on around it’ – where there’s still a lot of work that needs to be done.”

The AIML has a Centre for Augmented Reasoning.

The reasoning we’re trying to explore is the ability for a machine to go beyond what it’s been trained upon.

Simon Lucey

“I think the big opportunities in AI over the next couple of decades are around creating data-efficient learning for systems that can reason,” Lucey explains.

And the various AIML research teams are already chalking up wins.

“We’ve successfully applied that approach to the autonomous car industry. We’ve also had a lot of success in other areas, such as recognising the geometry, shape and properties of new objects.”

That is helping give machines a sense of object permanence. And that, in turn, is leading to solutions like AI-generated motion video that looks “real”.

The motive behind it all is to give AI the ability to extrapolate cause and effect.

“The reasoning we’re trying to explore is the ability for a machine to go beyond what it’s been trained upon,” says Lucey. “That’s something very special to humans that machines still struggle with.”
