Feature
22 Sep 2014

Artificial intelligence: Your future today

Automated systems are driving cars, beating humans at game shows, consulting as cancer specialists and acting as pocket personal assistants. Elizabeth Finkel discovers that AI is already here and, far from being terrified of it, we love it.

Credit: Michael Weldon/The Jacky Winter Group

THEODORE: I wish you were in this room with me right now. I wish I could put my arms around you. I wish I could touch you.

SAMANTHA: How would you touch me?

THEODORE: I would touch you on your face with just the tips of my fingers. And put my cheek against your cheek.

SAMANTHA: That's nice.

THEODORE: And just rub it so softly.

SAMANTHA: Would you kiss me?

THEODORE: I would. I'd take your head into my hands.

SAMANTHA: Keep talking.

This could be a steamy scene from an R-rated movie. It’s not. Samantha is a program, the “operating system” for Theodore’s computer. In the movie Her, director Spike Jonze paints an all-too-vivid picture of life with an artificially intelligent being.

We’ve met artificially intelligent beings before. In 1968 HAL, the soft-spoken and ultimately murderous shipboard computer of 2001: A Space Odyssey, seemed comfortingly far off. Samantha does not. Many of us hold her prototype in our pockets right now – just try asking Apple's Siri if she loves you.

Siri is not Samantha yet. But how long until someone like Samantha emerges from your computer screen?

So far, predictions of human-like AI have been something of a soggy fireworks show. Ever since the 1960s, round-the-corner claims have soared only to fizzle. But this time around the show is different. The new generation of smart machines carries a payload of processing power orders of magnitude larger than its predecessors', and it runs brain-like programs that allow the machines to learn. The result? Machines are marching into territory that was once the preserve of humans. The latest can outdo us at recognising faces or deciphering CAPTCHAs, the distorted words that until now have been relied upon to distinguish humans from automated bots out to steal online passwords.

Elsewhere automated systems are driving cars, beating humans at game shows, consulting as cancer specialists and, of course, acting as pocket personal assistants. AI is already here – in bits and pieces. And far from being terrified of it, we love it. Her shows how our fictions have changed. HAL was a killer. The worst thing Samantha does is run off with another AI.

But as the bits and pieces come together, will we really see a human-like intelligence emerge?

Ray Kurzweil, an AI pioneer and futurist, famously predicted the “singularity” will be here by 2029. At this point computers reach human intelligence. And according to Kurzweil, the machines then start designing more advanced versions of themselves and life on Earth becomes as unknowable as the singularity beyond the event horizon of a black hole. He also believes uploading our minds into the hardware of machines is on the cards.

That future still sounds way-out. But the prediction of human-like intelligence by 2029 is starting to look downright conservative.

Credit: Michael Weldon/The Jacky Winter Group

People have dreamed of artificial intelligence for a long time. You could look back to Talos of Crete, a giant man of bronze who served as bodyguard to the mythical Europa; to 1920, when the term robot was coined in Karel Capek's play Rossum's Universal Robots; or to 1950, when Isaac Asimov published his I, Robot collection of stories. In one of his stories Asimov even imagined a human-robot love affair.

Engineers started taking machine intelligence seriously during World War II. Alan Turing's codebreaking machines helped the British crack the Enigma codes of German submarines, while MIT mathematician Norbert Wiener used a computer to help aim anti-aircraft guns. The positions of the overhead bombers could be tracked by radar, but the computer had to calculate where a plane would be some 20 seconds after firing, when the shell arrived.

Wiener hit on a thrilling discovery. The guns occasionally flew into wild oscillations, a problem that could be rectified using "negative feedback" – continually correcting the aim against the error between where the gun pointed and where it needed to point. Neurologists had recently proposed the same mechanism to explain how the brain controls the movement of limbs, relying on feedback from sensory information.
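To get a feel for the idea, here is a toy sketch in Python – nothing like Wiener's actual gun-laying mathematics. A controller repeatedly nudges the aim towards the target by some fraction of the current error. Pick the correction gain sensibly and the aim settles; over-correct and it oscillates.

```python
# A toy illustration of negative feedback (not Wiener's gun-laying maths):
# each step, measure the error between target and aim, then correct against it.

def track(target, gain, steps=8):
    """Return the aim over time under simple proportional negative feedback."""
    aim, history = 0.0, []
    for _ in range(steps):
        error = target - aim        # sensory feedback: how far off are we?
        aim += gain * error         # correct against the error
        history.append(round(aim, 2))
    return history

print(track(target=10.0, gain=0.5))  # settles smoothly: 5.0, 7.5, 8.75, ...
print(track(target=10.0, gain=2.1))  # over-corrects: the oscillations grow
```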

To Wiener and his colleagues it seemed they had hit on the beginnings of a grand unified theory of intelligence. “We have decided to call the entire field of control and communication theory, whether in the machine or in the animal, by the name cybernetics, which we form from the Greek … steersman,” they declared in 1948. It marked the birth of an idea that took hold and has never let go. “Cybernetics is not just another branch of science. It is an intellectual revolution that rivals in importance the earlier industrial revolution,” wrote Asimov in 1956.

That same year a leading group of computer scientists met for a two-month conference at Dartmouth College. They included Marvin Minsky, John McCarthy, Claude Shannon and Nathaniel Rochester. Their goal? To lay down a roadmap for how to make a machine of human-like intelligence. To mark their endeavour they coined the term artificial intelligence, or AI.

“We propose that a two-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

Cracking AI proved more than a summer's work, but the scientists' fervour infected the US military and science funding agencies. It was the height of the Cold War and the Russians had a lead on America with the launch of the Sputnik satellite. The US defence department's Advanced Research Projects Agency, then known as ARPA (now DARPA), saw AI as a way to get ahead. The first AI lab was established in 1959 at MIT under the joint leadership of Minsky and McCarthy. For more than a decade AI experienced a golden age. The funding flowed and there were exciting advances.

McCarthy wrote the versatile computer language LISP to tackle tasks more diverse than tracking aircraft or breaking codes. Machines learnt to assemble blocks, play table soccer and chess. Minsky predicted that by the end of the 1970s a machine would beat a human chess champion and have general human-like intelligence. And why not? They were already 1,000 times faster than humans at crunching numbers.

It didn’t happen. The 1970s came and went with no champion chess machine let alone a humanly intelligent one. Worse, at least for AI research, the Vietnam War ended and with it the defence department’s enthusiasm for blue-sky research. The golden age clouded over into an AI winter. Many researchers relocated to more pragmatic jobs in IT. “They had to get money from somewhere; they did applications people would pay for,” explains computer scientist and author Josh Storrs Hall. Hans Moravec, an AI and robotics researcher at Carnegie Mellon University, recalls “everyone was depressed”.

Yet many of the items set out on the original AI roadmap had been achieved. Besides computer languages, Minsky and colleagues conducted pioneering work on neural nets – circuitry modelled on the way the brain works. In a traditional computer circuit the output is fixed entirely by the inputs. A brain neuron, by contrast, sums the signals arriving from other neurons and, with experience, comes to weight some of those inputs more heavily than others before deciding whether to fire. That weighting step gives the neuron the chance to learn – to make decisions based on previous experience. Like the neurons in our brains, neural nets were designed to learn.
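A single artificial "neuron" of this kind can be sketched in a few lines of Python – a toy illustration rather than the circuitry of Minsky's era. It weights its inputs, fires when the weighted sum crosses a threshold, and nudges the weights whenever it answers wrongly (the classic perceptron learning rule):

```python
# A toy perceptron: a single artificial neuron that learns the logical AND
# of two inputs by adjusting its weights after each mistake.

def fire(weights, bias, inputs):
    """Output 1 if the weighted sum of inputs crosses the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND

for _ in range(20):                      # a few passes over the examples
    for inputs, target in examples:
        error = target - fire(weights, bias, inputs)
        # Learning: strengthen or weaken each input's weight according to the error
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([fire(weights, bias, x) for x, _ in examples])   # -> [0, 0, 0, 1]
```

After a few passes the weights settle on values that reproduce logical AND – learning, in miniature.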

McCarthy meanwhile had set up his own AI lab at Stanford after he and Minsky had a parting of the ways. McCarthy believed the future of AI lay not with neural networks but with symbolic logic, in which machines are given explicit instructions.

One of the triumphs of the symbolic logic approach was “Shakey”, an ungainly robot dubbed the “first electronic person” by Life magazine. Shakey was developed at Stanford Research Institute (SRI), then part of Stanford. (Stanford and SRI broke off their relationship in 1970 in the face of anti-war protests since much of SRI’s funding came from the Department of Defense.) Fitted out with a camera, sensors and a wire antenna connecting it to a room-sized computer nearby, Shakey could solve problems such as how to push a block off a platform using a ramp. It was not much different to solving a mathematical equation. To shift the block, it needed to push the ramp to the platform, climb the platform and shove. The only difference was that the robot was acting it out in the real world.

But Shakey was not only shaky but slow. And when it came to neural networks, as Minsky himself pointed out, they were very limited in what they could do. It turns out these early dreamers of AI had vastly underestimated the power and complexity of the living machine they sought to copy.

Credit: Michael Weldon/The Jacky Winter Group

Hans Moravec was one of the dreamers. The Austrian-born robotics engineer built his first robot when he was 10. As a PhD student at Stanford in the late 1970s he built a robot like Shakey that negotiated obstacle courses. Then he went on to Carnegie Mellon and in 2003 founded Seegrid Corporation to commercialise his “free-ranging” robots. But back in the late 1970s at Stanford, Moravec became acutely aware of the mismatch between the human and robotic brains. True, machines could leave us in the dust when it came to crunching numbers. But navigating an obstacle course was another matter. It took his robot more than five hours to travel 30 metres. As Moravec put it, “computer intelligence was an illusion, and robots laid that illusion bare”. Therein lay a puzzle – how could a computer be a thousand times faster than a human at maths, but so hopeless at visual processing? Moravec turned to evolution for an answer to what has become known as “Moravec’s paradox”.

Most humans struggle with maths but we are able to navigate our visual environment without a thought. Yet vision takes enormous processing power – as Moravec discovered when trying to teach his robots to navigate. Much of our vision is devoted to recognising edges and movement. Moravec was teaching his robots to do similar things. But when he consulted brain researchers who were studying vision he learnt a sobering fact. It turned out that to mimic just the edge and motion detection functions of the retina would require a million million instructions per second. His computer could deliver one million instructions per second – a millionth of what was needed. Mimicking a slice of retina would require a thousand times as much processing power as the entire Stanford computer room!

This calculation went some way to explaining why his robots were so agonisingly slow at navigating obstacles. It also led Moravec to ponder the differences between man and machine. Vision is crucial to our survival and we have evolved to excel at it, developing highly interconnected neurons (each is connected to 10,000 others) to do massive parallel processing that efficiently handles the complex task. But number crunching works best with linear processing and our brain architecture is poorly suited to it.

About a third of our cortex, the key processing region of the brain, is devoted to vision. But those neurons aren’t only used for vision, says Moravec. “Everyone thinks through problems visually – Kekulé [the 19th century German chemist] could see the structure of benzene.” Other parts of the brain seem to help out with thinking, too. “An enormous amount of processing involves the perceptual and motor circuitry – Einstein could ‘feel’ the formula.” (Not to mention the many of us who talk with our hands and get our best ideas while walking.) So by Moravec’s calculation, quite apart from seeing or moving, if machines were going to be able to think like humans they would need massive amounts of processing power.

Despite the gloom of many of his colleagues, Moravec was upbeat about the future of AI. Partly because he’s an upbeat sort of guy. But part of his optimism was inspired by reading a 1965 article in Electronics magazine by Gordon Moore (co-founder of Intel Corporation). The article observed that the amount of processing that could be packed onto a chip was doubling every year or two – the trend that became known as Moore’s law. In the 1980s Moravec started making predictions about when robots might start reaching benchmarks on the evolutionary scale. Computers of the day were like slugs, taking five hours to cross a room. By the 1990s they were like small insects. By 2010 they should be more like guppies – the tiniest of fish. Moravec plotted these data points on a graph and extrapolated. By 2040 or 2050 a robot with human-level intelligence ought to emerge.
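The arithmetic behind such forecasts is easy to sketch. The back-of-the-envelope Python below uses assumed round numbers – a 1-MIPS machine in 1980, a doubling every two years, and the figure of roughly 100 million MIPS often attributed to Moravec for human-level performance – and simply counts doublings:

```python
# Back-of-the-envelope Moravec-style extrapolation with assumed round numbers;
# the real forecasts used cost-per-computation curves, not raw speed alone.

start_year, mips = 1980, 1.0          # assume a 1-MIPS machine in 1980
doubling_period = 2                   # years per doubling (Moore's-law style)
human_level = 1e8                     # ~100 million MIPS, an assumed target

year = start_year
while mips < human_level:
    year += doubling_period
    mips *= 2

print(f"Crosses the assumed human-level target around {year}")
```

With these particular numbers the crossover lands in the mid-2030s; shift the starting figure, the doubling period or the target and it slides by a decade or more either way, which is why such forecasts scatter from the 2030s to the 2050s.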

He published his ideas in two exuberant and controversial books, Mind Children in 1988 and Robot in 1998. They gazed into a future in which robot evolution vastly outpaced human evolution and yet were quite sanguine about the consequences. Moravec’s views haven’t changed. As he told me in August, “Look at the chain of organisms in the tree of life. It stops with humans but computers keep going.” He added, “no species lasts forever”.

Moravec said he had not talked to journalists for nearly a decade “partly to distance myself from futurism that might have frightened investors”. But he told me the point of his future-gazing was neither to titillate nor to terrorise. “I was working out my life’s game plan.” Moravec was going to make smart robots and the game plan gave him a schedule and marching orders. “I’m still sticking to it,” he said.

Fast forward to 2014 and Moravec’s graph of robot evolution isn’t too far from the mark.

As Moore foretold, my laptop is processing at the rate of 100,000 million instructions per second. That is 100,000 times more powerful than Moravec’s 1980 machine. Moravec had predicted this would make my laptop as smart as a real mouse. But mammalian brains turned out to be more complex than Moravec thought. My computer is still probably only guppy-smart.

Nevertheless, even my iPhone, running at about a fifth the processing capacity of my computer, is powerful enough to run its star app – Siri. You may take Siri and her ilk for granted by now but step back a moment and consider her talents.

Siri can navigate the ambiguities of natural language. For instance if I ask her to “find me some great sci-fi flicks”, she’s back in a wink with, “I found quite a number of sci-fi movies. I’ve sorted them by quality”. She wasn’t waylaid by the multiple meanings of words such as flick or great, something her predecessors would get snared by. For instance, computers of the 1950s were known to translate the phrase “out of sight, out of mind” into “invisible idiot”.

But processing grunt does not account for all of Siri’s abilities. Siri actually learns.

These days people are more likely to label her talents as “machine learning” rather than artificial intelligence. There’s a slight stigma attached to AI for its historically inflated claims. “Machine learning is a bit more modest,” says Horst Simon, deputy director of Lawrence Berkeley National Laboratory (LBNL). Yet there’s no doubt machine learning has its roots in the golden age of AI. Machine learning relies on learning patterns from vast amounts of data, similar to the way human brains operate.

Neural networks are one of its major learning aids – one powerful new iteration goes by the name of “Deep Learning”. (The improvement on the 1970s version is that the networks are now stacked in many layers, conceptually mimicking the arrangement of neurons in the brain’s cortex.)
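What “layered” means in practice can be sketched in a few lines – a toy forward pass on made-up numbers, not any production system. Each layer transforms the previous layer’s output, so later layers can respond to progressively more abstract patterns in the data:

```python
import numpy as np

# Illustrative only: a tiny two-layer network doing a forward pass on random
# numbers. Each layer multiplies its input by a weight matrix and applies a
# non-linearity; stacking layers is what puts the "deep" in deep learning.

rng = np.random.default_rng(0)

def layer(x, weights):
    """One layer: weighted sum of inputs followed by a ReLU non-linearity."""
    return np.maximum(0.0, weights @ x)

pixels = rng.random(16)               # a pretend 4x4 image, flattened
w1 = rng.standard_normal((8, 16))     # first layer: 16 inputs -> 8 features
w2 = rng.standard_normal((2, 8))      # second layer: 8 features -> 2 outputs

hidden = layer(pixels, w1)            # low-level features (edges, blobs, ...)
output = layer(hidden, w2)            # higher-level judgement (cat or not?)
print(output)
```

Training such a network means adjusting all of those weights from examples; the layering is what lets the deepest layers pick out things as abstract as faces or cats.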

Siri herself is a direct descendant of the golden age. Though she has been Apple property since 2010, her name reflects her origins at the Stanford Research Institute, the same place where Shakey took its first steps. Like Shakey, Siri was funded by DARPA. Her origins lie in a 2003 research project named CALO, for Cognitive Assistant that Learns and Organises. Its goal was not to develop a phone app but to provide military support. Described as the largest AI program in history, it involved hundreds of AI experts who came together to take machine learning to new heights. As Bianca Bosker wrote in a 2013 article in the Huffington Post, “It also demonstrated that a machine could learn in real time through its lived experience, as a human being does.”

Siri understands me when I say “find me some great sci-fi flicks” because she has been trained on a vast database of colloquial language and continually updates her knowledge. By contrast, researchers in the 1950s tried to teach computers explicit rules for understanding language, which proved problematic. “If you try and teach logical definitions, you can’t. You have to do it the same way your mind does it – you remember every time you hear a phrase and you pick what you want,” explains Storrs Hall.

Machine learning has yielded some striking successes. In 2011, for instance, Andrew Ng, director of the Stanford Artificial Intelligence Lab, collaborated with Google to hook up 1,000 computers and see how they would categorise 10 million images from YouTube. They came up with cats and human faces. This year two icons of human supremacy have toppled off the pedestal. According to an April report on arXiv (a repository for research that has yet to be peer-reviewed and published) from Ian Goodfellow and colleagues at Google, machines have learnt to outdo us at reCAPTCHA, the distorted text described by the authors as “one of the most secure reverse Turing tests…to distinguish humans from bots”. Another preserve of human superiority, facial recognition, seems to have toppled with a June arXiv report from Chaochao Lu and Xiaoou Tang at the Chinese University of Hong Kong, claiming that their algorithm GaussianFace outperforms humans for the first time.

IBM’s Watson is another stunning example of the successes of machine learning. In 2011 Watson beat the world’s reigning champions at Jeopardy, the game show in which contestants must find the “question” to an “answer”, sometimes with a rhyme. For instance the answer may be: It’s where Pele stores his ball. The right (rhyming) question: What’s a soccer locker?

Watson had to understand natural language (though the clues were typed in for him) and trawl a database of 200 million pages of text, including all of Wikipedia. It helped that he could read at the rate of a million books per second. Now his phenomenal searching and analytical talents are being deployed to help oncologists choose the best treatment options for patients at Sloan Kettering Cancer Center and to test out financial advice at two of the world’s biggest banks, Citigroup and ANZ.

And in August IBM researchers announced a radically new kind of computer chip. Computers have barely changed their basic architecture since John von Neumann laid it out in 1945. The new chip, TrueNorth, has a more brain-like architecture and promises brain-like efficiency. Arrays of these chips could crunch vast amounts of data using a thousandth of the power Watson requires. TrueNorth and similar new designs have people excited. “We’ve yet to find out just what they can do. It makes me think back to the first digital computers. They were developed to predict the flight path of a missile. The inventors would have been surprised to see them now being used to play our music, organise our photos or do our online shopping,” says Horst Simon at LBNL.

The seeds of artificial intelligence appear to be sprouting. And with mega-companies such as IBM, Apple and Google rushing in to tend them it looks like another golden age has arrived. In the last year Google purchased a British AI company called DeepMind, among other acquisitions, and appointed Kurzweil as director of engineering to teach machines to “understand” what they read. “It’s like 300 years ago at the dawn of the industrial revolution. Back then, some folks could see where the steam engine was going,” says Storrs Hall. In a YouTube interview this March, Google cofounder Larry Page said:

“Our mission we defined a long time ago is to organise the world's information and make it universally accessible and useful. We really haven't done that yet. It's still very, very clunky.”

He showed a video of DeepMind’s software, which teaches itself to play video games from nothing more than the raw screen images and the score. “It’s learnt to play all these games with superhuman performance. We've not been able to do things like this with computers before.” Page noted that DeepMind was the brainchild of Demis Hassabis, a computer scientist and neuroscientist. “I think we're seeing a lot of exciting work going on that crosses computer science and neuroscience in terms of really understanding what it takes to make something smart.”

Page could be speaking straight from the Minsky and McCarthy AI playbook.

So will we get to meet a Samantha in the next couple of decades, or less?

Credit: Michael Weldon/The Jacky Winter Group

One thing everyone agrees on: even machines as powerful as Watson still lack common sense. Henry Lieberman, who works on AI at the MIT Media Lab, an offshoot of the original Minsky lab, followed Watson’s progress on Jeopardy closely. He points out that Watson makes mistakes because it lacks the sort of experience humans start gathering as babies – it does not know, for instance, that water is wet.

Lieberman gives the following example of one of Watson’s mistakes.

Clue: It was this anatomical oddity of US gymnast George Eyser….

Watson’s wrong answer: Leg

Correct answer: Missing a leg

Watson knew from its vast database that “leg” was an anatomical part associated with George Eyser. But it did not have the common sense to see that the leg itself was not the oddity; the oddity was that the leg was missing. But if we learn our common sense through experience, why couldn’t computers learn it too? That’s what Lieberman and his colleagues are doing now. “We’re trying to put together a knowledge base to help machines be more sensible in a range of capabilities.”

So computers can read a million books a second, can learn, and with new architectures may do it very much more efficiently. They may also learn some common sense. So then, what’s left?

Something big, it turns out.

And that is the ability to generalise a theory from data. Kevin Korb, an AI expert at Monash University in Melbourne, explains: “Like Watson we search our databases and rely on prior knowledge. But we can also generalise. We know that for instance you can’t push a car with a piece of string even though we've never tried it.” In a sense every human being is a scientist, taking data about the world and building theories about how the world works. And so far no one has figured out how to teach a computer to do that.

“We don’t know the general code for learning. We’ve been struggling with that for 60 years.” Korb’s project involves teaching computers how to arrive at a general hypothesis about the atmospheric conditions that are likely to result in fog at Melbourne’s airports. “I want to understand scientific induction and then automate it. Like all AI researchers I have modest goals.” But how far away is such an achievement? Korb suggests 500 years, partly “to throw cold water on overheated enthusiasm”.

Hans Moravec told me that John McCarthy believed that nailing the general code for learning would be possible but would require two more Einsteins and three more Newtons. McCarthy did not believe it was a matter of more computing power. Moravec disagreed then, as he does now. Machine intelligence will emerge incrementally with increased computing power just as it did in the biological world, he believes.

Horst Simon disagrees. “This is a big debate. Why should something fancy emerge when you scale up? Right now I don’t see that.”

So take your pick from those who say the bits and pieces will lead to Samantha, and those who say the bits and pieces will just be bits and pieces – albeit very useful ones, that will end up being quite hard to distinguish from Samantha.

After all, as Moravec puts it, “the truth about AI is that no one really knows what they are talking about.”

This article is part of our special edition, Rise of the Robots.

Elizabeth Finkel is editor-in-chief of COSMOS.