The winters of AI’s discontent

The history of artificial intelligence (AI) has been characterised by a chronic failure to appreciate just how hard many of the problems it has tried to solve really are.

This is not because AI researchers are stupid. Rather, it is because our brains do such a good job of hiding from our conscious introspection just how good they are at certain tasks, and what algorithms they use to solve them.

Many of the problems which feel like hard work for us, such as proving mathematical theorems, were actually the first to be conquered by AI. It’s the problems that initially seem trivial to us, such as understanding a simple sentence or reconstructing a 3D scene from a 2D image, that have proved the most difficult for a machine to solve.

The modern era of AI began in the 1950s, a few years after Alan Turing’s groundbreaking work on theoretical computer science. From the outset, two competing approaches vied for attention.

One camp took inspiration from the brain. They tried to create AI using neural networks: collections of simple processing units loosely modelled on biological neurons. The key challenge was to find rules by which such networks could be trained to solve problems.

The other camp eschewed biological considerations, and instead tried to create AI by manipulating symbols using formal rules. With the hindsight of the 1980s, this approach was nicknamed GOFAI, or “good old-fashioned artificial intelligence”.

Both camps chalked up early successes. GOFAI researchers wrote a program that could prove mathematical theorems, perhaps the ultimate symbol-processing problem. Neural network researchers, meanwhile, showed how “perceptrons”, a simple type of neural network, could be trained from examples to solve simple classification tasks.
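To give a flavour of what that training involved, here is a minimal sketch of the classic perceptron learning rule in Python. The toy dataset and all numbers are invented purely for illustration.

```python
import numpy as np

# Minimal perceptron sketch: learn to separate two classes of 2D points.
# Hypothetical toy data: points above the line y = x are labelled +1,
# points below are labelled -1 (a linearly separable problem).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))
y = np.where(X[:, 1] > X[:, 0], 1, -1)

w = np.zeros(2)  # weights
b = 0.0          # bias

# Classic perceptron learning rule: whenever a training example is
# misclassified, nudge the weights towards classifying it correctly.
for epoch in range(20):
    for xi, yi in zip(X, y):
        if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on the boundary)
            w += yi * xi
            b += yi

accuracy = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Because a perceptron can only draw a straight line (or flat plane) between classes, this works well here, but fails on problems whose classes cannot be separated that way.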

However, in the following 20 years most of the successes were won by GOFAI. It made progress on, for instance, perceptual and reasoning problems in restricted domains such as the “blocks world”, in which vision (or rather, interpreting visual information), reasoning and action were combined to rearrange stacks of coloured blocks.

Such successes generated a great deal of hype and government funding. Unfortunately, the hype turned out to be unwarranted. Solutions for blocks worlds failed in more realistic settings. Neural networks fared little better: it was proved mathematically that perceptrons can only learn solutions to a narrow class of simple problems (famously, a single perceptron cannot even learn the XOR function). The real world was proving rather more complicated than people had realised.

The money therefore dried up, leading in the 1970s to the first “AI Winter”.

Although the AI community shrank dramatically, work still continued. The first renaissance was the rise in the 1980s of “expert systems”. Based on symbol processing, these were programs that encoded human knowledge and reasoning about a restricted domain. A classic example was diagnosing bacterial infections. Soon such “knowledge-based systems” were all the rage, and many companies started investing in in-house AI to improve their productivity.
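As a rough illustration of the idea, the sketch below implements a toy knowledge-based system in Python: a handful of if-then rules applied repeatedly to a set of facts until nothing new can be concluded. The “medical” facts and rules are entirely invented and bear no relation to any real expert system.

```python
# Toy rule-based sketch (hypothetical rules, purely illustrative).
# Facts are strings; each rule says "if all these conditions hold,
# conclude this new fact".
facts = {"fever", "productive_cough"}
rules = [
    ({"fever", "productive_cough"}, "suspect_bacterial_infection"),
    ({"suspect_bacterial_infection"}, "recommend_culture_test"),
]

# Simple forward chaining: keep applying rules until no new facts appear.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

Real expert systems were vastly larger, with thousands of rules and ways of handling uncertainty, but the basic pattern of encoding human knowledge as explicit rules was the same.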

The second renaissance of the 1980s was the realisation that it was possible to train much more general types of neural networks than perceptrons, and that these networks could, at least in principle, extract very sophisticated knowledge from examples.

AI was off and running again.

Unfortunately, many researchers again fell prey to the tempting belief that all problems in AI would now be solved very quickly. This turned out not to be the case. Money again became scarce, and by the 1990s the second AI Winter had arrived.

Nevertheless, progress continued steadily behind the scenes, and algorithms developed by the AI community penetrated many areas of life. It just wasn’t cool to call them AI. Indeed, it was remarked at the time that AI was unsuccessful by definition: once it solved a problem, the solution was simply relabelled as computer science.

This all sounds surprising given the current frenzy of interest in AI. What has changed over the past decade or so?

First, researchers discovered improved algorithms such as “deep learning”, which allows neural networks with many layers to be trained effectively (a toy sketch of such a network appears below). This has turned out to be particularly powerful when combined with “reinforcement learning”, in which actions are learned by trial and error, guided by rewards.

Second, neural networks thrive on training data, and the rise of the internet has led to an explosion in the amount of data available. Third, highly optimised computer hardware, originally developed to serve the gaming market, has turned out to be ideally suited for training large neural networks.
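As a minimal illustration of the first ingredient, the sketch below trains a tiny two-layer network with gradient descent on the XOR function, exactly the kind of problem a single perceptron provably cannot learn. It is a toy example and a long way from modern deep networks, but the principle of adjusting many layers of weights from examples is the same.

```python
import numpy as np

# Toy two-layer network trained by gradient descent on XOR.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)  # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)  # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, propagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```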

This combination of algorithms, data and hardware has so far been unstoppable. Neural networks now equal or exceed human performance in many tasks including pattern recognition and playing games such as Go, which had previously proved very difficult for AI. Critically, this performance depends on learning from experience, rather than the GOFAI approach of engineering knowledge into the system to start with.

AI might now seem poised to solve, better than humans, almost any problem involving “intelligence”. Significant challenges remain, however, such as how to learn without the vast amounts of data current algorithms require. Some leading AI researchers have recently argued that neural networks must not forget their roots in neuroscience. Our understanding of the brain is progressing rapidly, but this new knowledge has yet to be translated into AI algorithms.

Furthermore, in its own form of “learning from experience”, the history of AI offers an important counterbalance to the current state of high excitement. Perhaps we really will now be able to create general AI which can solve all the problems that humans can solve, and more. Or perhaps we are just in the middle of another bubble, and winter is coming. Only time will tell.
