Why aren’t computers creative?

Physicist Roger Rassool looks at the great advances in creating 'clever' machines.

If you gave a troupe of monkeys a keyboard, and waited long enough, would they eventually type the works of Shakespeare?

No. Just to come up with “to be or not to be” would take a monkey about four million years. The monkey has, at best, a 1 in 26 chance of typing the first letter ‘t’, and the same for each subsequent letter. My guess is there is not enough time left in the universe for the monkeys to succeed at Shakespeare.
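To see how that 1-in-26 chance compounds, here is a back-of-envelope sketch of my own (it ignores spaces and punctuation and assumes a 26-key, letters-only keyboard):

```python
# Chance of a monkey typing "to be or not to be" in a single attempt,
# ignoring spaces and assuming a 26-key, letters-only keyboard.
phrase = "tobeornottobe"          # the famous phrase without spaces
p = (1 / 26) ** len(phrase)       # 1-in-26 per letter, compounded
print(f"chance per attempt: {p:.1e}")
```

Under these simplifying assumptions, the chance of success on any single attempt comes out to roughly 4 in 10 billion billion.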

Jesse Anderson, an American programmer, has given computer monkeys a try. These are programs uploaded to Amazon servers that randomly compose nine-letter strings of text. Millions of these virtual monkeys are mashing away at virtual keyboards. They have more of a chance because their efforts are not entirely random. They get feedback on their work. If they happen to type a nine-letter string that appears in Shakespeare, then that string is selected for inclusion. They are reputed to have typed 99.99% of the works of Shakespeare, though not in order. But these monkeys aren’t composing the works of Shakespeare, they’re imitating them using a rule, a so-called algorithm. What we’re really asking here is: how is a human brain different from a computer? Why is it so? And, will it always be so?
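The select-and-keep rule can be illustrated in a few lines. This is my own toy sketch, not Jesse Anderson's actual code, scaled down to three-letter strings so it finishes quickly:

```python
import random
import string

# Toy version of the "computer monkeys" idea: random strings are typed,
# and any that happen to appear in the target text are selected and
# kept. The selection step is the feedback.
target = "tobeornottobethatisthequestion"  # stand-in for Shakespeare
found = set()

random.seed(42)
for _ in range(50_000):
    guess = "".join(random.choices(string.ascii_lowercase, k=3))
    if guess in target:
        found.add(guess)

print(f"selected {len(found)} distinct fragments of the target")
```

Even in this scaled-down toy, selection is what makes progress possible: a purely random nine-letter match would almost never occur, but kept fragments accumulate.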

There’s no doubt that computer brains outdo us in many ways. We can’t match them when it comes to numerical calculation, running complex machinery, or playing chess. Ask Garry Kasparov. Computers can even be programmed to compose music, though music experts claim they are not fooled.

Yet these computers are still performing the same sort of plodding task that the inventors designed them for – people like Charles Babbage, George Boole and Claude Shannon.

Back in the 1850s, Babbage, a philosopher as well as a mathematician and engineer, designed a programmable calculator that could be instructed by holes in cards, though it was never actually built. Around this time, George Boole, much to the dismay of his mathematical colleagues, came up with the idea that all of mathematics could be restricted to the two quantities, 0 and 1. In the 1930s, Claude Shannon, aged 21 and working on his Master’s thesis at MIT, realised that Boole’s binary logic could be represented by switching electrical circuits.
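Boole's 0-and-1 logic, and Shannon's realisation that switches can compute it, can be sketched in a few lines. This is an illustrative toy of my own, not period-accurate hardware:

```python
# Boole's insight in miniature: all of logic with just the quantities
# 0 and 1. Shannon showed the same operations can be built from
# electrical switches: series wiring gives AND, parallel gives OR.
def AND(a, b): return a & b   # two switches in series
def OR(a, b):  return a | b   # two switches in parallel
def NOT(a):    return 1 - a   # a normally-closed switch

# Every row of the truth table uses only 0s and 1s.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```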

The rest, as they say, is history! The ‘digital’ computer was born. The size of the circuits shrank; every 18 months the number that could fit on a circuit board doubled, as Gordon Moore noticed in 1965 (see Incurable Engineer, p31), so computer speed rocketed. Yet, no matter how fast they were, they were still ploddingly following a set of relatively simple rules. So what does it take to get a computer that can actually create something, rather than just follow a set of rules?
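Moore's doubling is easy to underestimate. A quick, purely illustrative calculation shows how it compounds:

```python
# Doubling every 18 months compounds astonishingly fast.
# Illustrative arithmetic only, not actual transistor counts.
months = 30 * 12                  # thirty years of progress
doublings = months / 18           # one doubling per 18 months
growth = 2 ** doublings
print(f"{doublings:.0f} doublings -> {growth:,.0f}x more circuitry")
```

Thirty years of doublings multiplies capacity by about a million.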

What it needs is no different from what a human child needs: experiential learning. From the moment of birth, a human being gets feedback on every task they perform. Getting feedback certainly helped the computer monkeys move forward with their Shakespeare.

A new breed of computers is now being built: machines that not only get feedback on what they do but can also learn from that feedback and change the algorithm they follow. And you find them everywhere. The brakes on modern trains are now clever enough to compensate for the number of passengers on board by sensing how the train reacts. As the train gets more crowded, the brakes come on earlier to ensure a smooth stop at the station. A computer-aided semi-trailer is equipped with sensors that monitor the angle of the trailer relative to the truck and try to prevent jack-knifing. GPS navigation systems can adapt to information like traffic conditions or road closures to pick the best route.
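The train-brake idea can be caricatured as a feedback loop. This is my own toy sketch with made-up numbers, not a real braking controller:

```python
# A feedback loop in the spirit of the self-adjusting train brakes.
# The controller never knows the passenger load directly; it only
# observes how each stop went and nudges its braking point.

def simulate_stop(brake_distance_m: float, load_factor: float) -> float:
    """Return stopping error in metres (positive = overshot the platform)."""
    needed = 100.0 * load_factor   # heavier trains need more distance
    return needed - brake_distance_m

brake_distance = 100.0   # current rule: start braking 100 m out
load = 1.3               # 30% more passengers than usual
for stop in range(10):
    error = simulate_stop(brake_distance, load)
    brake_distance += 0.5 * error   # feedback: learn from the overshoot

print(round(brake_distance, 1))   # settles near the 130 m actually needed
```

After a handful of stops, the rule itself has changed: braking now begins earlier, without anyone having told the controller the train was heavier.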

But the star of the show is IBM’s Watson. You can ask it questions in plain English or any other human language and it will answer you. That means Watson really has to “think”.

Natural human languages (unlike computer ones) are full of imprecise turns of phrase and double meanings. Watson has to make hypotheses about the likely meaning of sentences. Then it has to process terabytes of information, like the whole of Wikipedia, in a few seconds to make complex connections and give logical answers to questions.

A crucial difference between Watson and conventional machine learning is its ability to make inferences – connections that don’t already exist in its database. For example, consider the following facts: plastic tubing can be used to make a water hose; water hoses carry water; water is heavier than air. Watson could infer that a ‘used’ water hose would be heavier than a new one because it may still be filled with water.
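A toy forward-chaining sketch (my own illustration, not Watson's actual reasoning engine) shows how such a connection could be derived from stored facts rather than looked up:

```python
# Minimal forward chaining over the hose example. The derived fact
# is not in the database; it is produced by combining stored facts
# with a rule.
facts = {
    ("plastic tubing", "makes", "water hose"),
    ("water hose", "carries", "water"),
    ("water", "heavier_than", "air"),
}

def infer(facts):
    derived = set(facts)
    # Rule: if X carries Y, and Y is heavier than air, then a used X
    # may be heavier than a new X (it may still hold some Y).
    for (x, rel1, y) in facts:
        for (a, rel2, b) in facts:
            if rel1 == "carries" and a == y and rel2 == "heavier_than" and b == "air":
                derived.add(("used " + x, "heavier_than", "new " + x))
    return derived

new_facts = infer(facts) - facts
print(new_facts)
```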

In 2011 Watson tested its abilities by competing against human champions on the American quiz show Jeopardy. The game is horribly difficult: the contestants are given an answer, usually involving a pun, and they have to work out the question. This does not lend itself to the methodical database searching that computers routinely do to answer questions like “What is Roger Rassool’s phone number?” It’s the kind of reverse, intuitive thinking we previously thought only a human could do. Watson won, taking home a prize of one million dollars!

Having proved Watson’s ability to win a TV game, the IBM team is now tackling a much bigger problem – the fight against lung cancer. Nurses provide the symptoms and crucial information about their cancer patients to Watson in everyday language. It then “diagnoses” each case and searches its database for the best treatments. Amazingly, about 90% of the nurses agree with and follow Watson’s advice. And when they don’t, Watson gets to learn more about what doesn’t work for next time.

Computers may never write the works of Shakespeare, but they are getting very clever – beating us on quiz shows, guiding us through traffic and now diagnosing cancer.

The challenge ahead is to think carefully about what we humans still do better.

Roger Rassool is a particle physicist at the University of Melbourne. His outreach programs have switched on a new generation to the wonders of physics.