Social robots are coming
A new breed of automatons – friendly, engaging and even soft to touch – will soon be among us. And they are moving into the real world at remarkable speed, as Wilson da Silva discovers.
There is something unnerving about Geminoid F. She looks like a Japanese woman in her 20s, about 165 cm tall with long dark hair, brown eyes and soft pearly skin. She breathes, blinks, smiles, sighs, frowns, and speaks in a soft, considered tone.
But the soft skin is made of silicone and underneath it lies urethane foam flesh, a metal skeleton and a plastic head. Her movements are powered by pressurised gas and an air compressor is hidden behind her seat. She sits with her lifelike hands folded casually on her lap. She – one finds it hard to say “it” – was recently on loan to the Creative Robotics Lab at the University of New South Wales in Sydney, where robotics researcher David Silvera-Tawil set her up for a series of experiments.
“For the first three or four days I would get a shock when I came into the room early in the morning,” he says. “I’d feel that there was someone sitting there looking at me. I knew there was going to be a robot inside, and I knew it was not a person. But it happened every time!”
The director of the lab, Mari Velonaki, an experimental visual artist turned robotics researcher, has been collaborating with Geminoid F’s creator, Hiroshi Ishiguro, who has pioneered the design of lifelike androids at his Intelligent Robotics Laboratory at Osaka University. Their collaboration seeks to understand “presence” – the feeling we have when another human is in our midst. Can this sensation be reproduced by robots?
Velonaki has also experienced the shock of encountering Geminoid F. “It’s not about repulsion,” she says. “It’s an eerie, funny feeling. When you’re there at night, and you switch off the pneumatics … it’s strange. I’ve worked with many robots; I really like them. But there’s a moment when there’s an element of … strangeness.”
This strangeness has also been observed with hyper-real video or movie animations. It even has a name, “the uncanny valley”: the sense of disjunction we experience when the impression that something is alive and human does not entirely match the evidence of our senses (see addendum below).
For all her disturbing attributes, Geminoid F’s human-like qualities are strictly skin deep. She is actually a fancy US$100,000 puppet: partly driven by algorithms that move her head and face in lifelike ways, and partly guided by operators behind the scenes, who oversee her questions and answers to ensure they’re relevant. Geminoid F is not meant to be smart. She’s been created to help establish the etiquette of human-robot relations.
Which is why those studying her are cross-disciplinary types. “We hope that collaborations between artists, scientists and engineers can get us closer to a goal of building robots that interact with humans in more natural, intuitive and meaningful ways,” says Silvera-Tawil.
It is hoped Geminoid F will help pave the way for robots to take their first steps out of the fields and factory cages to work alongside us. In the near future her descendants – some human-like, others less so – will be looking after the elderly and teaching children.
It will happen sooner than you think.
Robots are about to become more commonplace,
with ‘social robots’ leading the way.
Rodney Brooks has been called “the bad boy of robotics”. More than once he has turned the field upside down, bulldozing shibboleths with new approaches that have turned out to be prophetic and influential.
Born in Adelaide, he moved to the US in 1977 for his PhD and by 1984 he was on the faculty at the Massachusetts Institute of Technology. There he created insect-like robots that, with very little brainpower, could navigate over rough terrain and climb steps. At the time the dominant paradigm was that robot mobility required massive processing power and a highly advanced artificial intelligence. Brooks reasoned that insects had puny brains and yet could move and navigate, so he created simple independent “brains” for each of the six legs of his robots, which followed basic commands (always stay upright irrespective of direction of motion), while a simple overseer brain coordinated collaborative movement. His work spawned what is now known as behaviour-based robotics, now used in mining field robots and bomb disposal robots.
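The architecture Brooks described can be illustrated with a toy sketch: each leg runs its own trivially simple rule, and a minimal overseer does nothing more than alternate two tripods of legs so the body always stays supported. The names and structure below are hypothetical, purely for illustration, and bear no relation to Brooks’s actual code.

```python
# Toy illustration of behaviour-based control: each leg has one local
# behaviour, and a simple overseer coordinates an alternating tripod gait.

class Leg:
    """A leg with one local rule: stay planted unless told to swing."""
    def __init__(self, name):
        self.name = name
        self.planted = True  # default behaviour: support the body

    def step(self, swing):
        # The leg reacts only to a single high-level signal.
        self.planted = not swing

class Overseer:
    """Coordinates six legs by alternating two tripods of three."""
    def __init__(self):
        self.legs = [Leg(n) for n in ("L1", "R2", "L3", "R1", "L2", "R3")]
        self.tripod_a = self.legs[:3]  # L1, R2, L3 swing together
        self.tripod_b = self.legs[3:]  # R1, L2, R3 swing together

    def walk_cycle(self):
        # Phase 1: tripod A swings forward while tripod B supports.
        for leg in self.tripod_a: leg.step(swing=True)
        for leg in self.tripod_b: leg.step(swing=False)
        supported = [l.name for l in self.legs if l.planted]
        # Phase 2: the roles reverse.
        for leg in self.tripod_a: leg.step(swing=False)
        for leg in self.tripod_b: leg.step(swing=True)
        return supported

robot = Overseer()
print(robot.walk_cycle())  # three legs always stay planted in each phase
```

The point of the design is that no single component needs a model of walking: stable locomotion emerges from simple local rules plus light-touch coordination.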
But it is the work he began in the 1990s – developing humanoid robots and exploring human-robot interactions – that may be an even greater game changer. First he created Cog, a humanoid robot of exposed wires, mechanical arms and a head with camera eyes, programmed to respond to humans. Cog’s intelligence grew in the same way a child’s does – by interacting with people. The Cog experiment fathered social robotics, in which autonomous machines interact with humans by using social cues and responding in ways people intuitively understand.
Brooks believes robots are about to become more commonplace, with “social robots” leading the way. Consider the demographics – the percentage of working-age adults in the US and Europe is around 80%, a statistic that has remained largely unchanged for 40 years. But over the next 40 years, this will fall to 69% in the US and 64% in Europe as the boomers retire. “As the people of retirement age increase there’ll be fewer people to take care of them, and I really think we’re going to have to have robots to help us,” Brooks says.
“I don’t mean companions – I mean robots doing things, like getting groceries from the car, up the stairs into the kitchen. I think we’ll all come to rely on robots in our daily lives.”
It’s a self-reinforcing loop – as machines understand the real world better, they learn faster.
In the 1990s, he and two of his MIT graduate students, Colin Angle and Helen Greiner, founded iRobot Corp, maker of the Roomba robot vacuum cleaner. It was the first company to bring robots to the masses – 12 million of their products have been sold worldwide and more than 1 million are now sold every year.
The company began by developing military robots for bomb disposal work. Known as PackBots, they’re rovers on caterpillar tracks packed with sensors and with a versatile arm. They’ve since been adapted for emergency rescue, handling hazardous materials or working alongside police hostage teams to locate snipers in city environments. More than 5,000 have been deployed worldwide. They were the first to enter the damaged Fukushima nuclear plant in 2011 – although they failed in their bid to vent explosive hydrogen from the plant.
With the success of the Roomba, iRobot has since launched other domestic lines: the floor-mopping Braava, the gutter-cleaning Looj, and the pool-cleaning Mirra. Its latest offering is the tall, free-standing RP-VITA, a telemedicine health care robot approved by the US Food and Drug Administration in 2013. It drives itself to pre-operative and post-surgical patients within a hospital, allowing doctors to assess them remotely.
Other companies have sprouted up in the past 15 years manufacturing robots that run across rocky terrain, manoeuvre in caves and underwater, or that can be thrown into hostile situations to provide intelligence.
Robot skills have grown through advances in natural language processing, artificial speech, vision and machine learning, and the proliferation of fast and inexpensive computing aided by access to the internet and big data. Computers can now tackle problems that, until recently, only people could handle. It’s a self-reinforcing loop – as machines understand the real world better, they learn faster. Robots that can interact with ordinary people are the next step. This is where Brooks comes in. “We have enough understanding of human-computer interaction and human-robot interaction to start building robots that can really interact with people,” he says. “An ordinary person, with no programming knowledge, can show it how to do something useful.”
In 2008 Brooks founded another company, Rethink Robotics, which has done exactly that – created a collaborative robot that can safely work elbow to elbow with humans. Baxter requires no programming and learns on the job, much as humans do. If you want it to pick an item from a conveyor belt, scan it and place it with others in a box, you grasp its mechanical hand and guide it through the entire routine. It works out what you mean it to do and goes to work. Baxter is cute too. Its face is an electronic screen, dominated by big, expressive cartoon eyes. When its sonar detects someone entering a room, it turns and looks at them, raising its virtual eyebrows. When Baxter picks something up, it looks at the arm it’s about to move, signalling to co-workers what it’s going to do. When Baxter is confused, it raises an eyebrow and shrugs.
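The teach-by-demonstration routine described above amounts to “record and replay”: the operator physically guides the compliant arm, the robot records the poses, then plays them back. The sketch below is a minimal, hypothetical illustration of that idea; Baxter’s real software is far more sophisticated and its actual SDK differs.

```python
# Minimal sketch of teach-by-demonstration: record hand-guided
# waypoints, then replay them as a motion sequence.

class TeachableArm:
    def __init__(self):
        self.waypoints = []            # recorded demonstration
        self.position = (0.0, 0.0, 0.0)

    def guide_to(self, x, y, z):
        # The operator moves the compliant arm by hand;
        # each pose is recorded as a waypoint.
        self.position = (x, y, z)
        self.waypoints.append(self.position)

    def replay(self):
        # Play the demonstrated routine back, pose by pose.
        path = []
        for pose in self.waypoints:
            self.position = pose
            path.append(pose)
        return path

arm = TeachableArm()
arm.guide_to(0.3, 0.1, 0.2)   # reach over the conveyor
arm.guide_to(0.3, 0.1, 0.05)  # lower to the item
arm.guide_to(0.6, 0.4, 0.3)   # move to the box
print(arm.replay())
```

What makes this approach significant is that the “program” is the demonstration itself: no code, no coordinates, no engineer required.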
Baxter, priced at an affordable $25,000, is aimed at small to medium businesses for whom robots have been prohibitively expensive until now. While robots are a big business today, generating $29 billion in annual sales, the market is still dominated by old-school industrial machines – disembodied arms reliant on complex and rigid programming. These automatons haven’t really changed much from those that began to appear on factory floors in the 1960s. They are stand-alone machines stuck in cages, hardware-based and unsafe for people to be around. Nevertheless, 1.35 million now operate worldwide, with 162,000 new ones sold every year. They’re used for welding, painting, assembly, packaging, product inspection and testing – all accomplished with speed and precision 24 hours a day. But Baxter and his ilk are starting to shake up the field.
“In the new style of robots there’s a lot of software with common-sense knowledge built in,” says Brooks.
Launched in 2012, Baxter is used in 18 countries in applications such as manufacturing, health care and education. Rethink Robotics’ backers include Amazon’s Jeff Bezos, whose own company is a big user of robots to handle goods in its warehouses. When Google revealed in December 2013 that it had acquired eight robotics companies, it sent a thunderbolt through the field.
Google created a division led by Andy Rubin, the man who spearheaded Android, the world’s most widely used smartphone software and who began his career as a robotics engineer. Only a month later, Google shelled out $650 million to buy DeepMind Technologies, a secretive artificial intelligence company in London developing general-purpose learning algorithms.
“As of 2014, things are finally changing,” says Dennis Hong, who heads the Robotics and Mechanisms Laboratory at the University of California in Los Angeles. “The fact that Google bought these companies shows that, finally, it’s time for the robotics business to really start.”
‘We’re entering a new era ... something we saw
with computers 30 years ago.’
Where does that leave social robotics? Since Baxter came on the scene, “everybody’s saying they’ve got collaborative robots,” chuckles Brooks. “But some of them are just dressed up, old-style interfaces. Industrial robots have not been made easy to use because it’s engineers who use them, and they like that complexity. We made them popular by making them easy to use.”
But as people and money flood into the field, artificial intelligence with social smarts is developing fast, says Brooks. Take Google’s self-driving car: the original algorithms were found to be useless in traffic. The cars would become trapped at four-way stop sign intersections because they couldn’t read other drivers’ intentions. The solution came in part by incorporating social skills into the algorithm.
Brooks hopes that Baxter will become smart and cheap enough that researchers will develop applications beyond manufacturing. Updates to its operating system already allow the latest model Baxters to be twice as accurate and operate three times faster than earlier models. Brian Scassellati, who studied under Brooks and is now a professor of computer science at Yale University, also believes robots are about to leave the factory and enter homes and schools. “We’re entering a new era ... something we saw with computers 30 years ago. Robotics is following that same curve.
“They are going to have a very important impact on populations that need a little bit of extra help, whether that’s children learning a new language, adults who are ageing and forgetful, or children with autism spectrum disorder who are struggling to learn social behaviour,” he says. In 2012, Scassellati’s Social Robotics Lab began a five-year, $10 million US National Science Foundation program with Stanford University, MIT and the University of Southern California to develop a new breed of “socially assistive” robots designed to help young children learn to read, overcome cognitive disabilities and perform physical exercises.
“At the end of five years, we’d like to have robots that can guide a child towards long-term educational goals ... and basically grow and develop with the child,” he says.
Despite the progress in human-robot interaction that has led to machines such as Baxter, Scassellati’s challenge is still daunting. It requires robots to detect, analyse and respond to children in a classroom; to adapt to their interactions, taking into account each child’s physical, social and cognitive differences; and to develop learning systems that achieve targeted lesson goals over weeks and months.
To try to achieve this, robots will be deployed in schools and homes for up to a year, with the researchers monitoring their work and building a knowledge base. Early indications are that real gains can be made in education, says Velonaki. Another of her collaborators, cognitive psychologist Katsumi Watanabe of the University of Tokyo, has tested the interaction of autistic children over several days with three types of robot: a fluffy toy that talks and reacts; a humanoid with visible cables and wires; and a lifelike android. Children usually prefer the fluffy toy to start with, but as they interact with the humanoid, and later the android, they grow in confidence and interaction skills – and have been known to interact with the android’s human operators when they emerge from behind the controls.
“By the time they go to the android, they’re almost ready to interact with a real human,” she says.
‘People themselves need to be upgraded
so they can do something value-creating.’
The number one fear people have of smart, lifelike humanoid robots is not that they’re creepy – but that they will take people’s jobs. And according to some economists and social researchers, we are right to worry.
Erik Brynjolfsson and Andrew McAfee of MIT’s Center for Digital Business say that, even before the global financial crisis in 2008, a disturbing trend was visible. From 2000 to 2007, US GDP and productivity rose faster than they had in any decade since the 1960s – yet employment growth slowed to a crawl. They believe this was due to automation and that the trend will only accelerate as big data, connectivity and cheaper robots become more commonplace.
“The pace and scale of this encroachment into human skills is relatively recent and has profound economic implications,” they write in their book, Race Against the Machine.
Economic historian Carl Benedikt Frey and artificial intelligence researcher Michael Osborne at the University of Oxford agree. They estimate that 47% of American jobs could be replaced “over the next decade or two”, including “most workers in transportation and logistics ... together with the bulk of office and administrative support workers, and labour in production occupations”.
Perhaps unsurprisingly, the robot industry takes the opposite view: that the widespread introduction of robots in the workplace will create jobs – specifically, jobs that would otherwise go offshore to developing countries. And they may have a point.
In June 2014, for example, the European Union launched the $3.6 billion Partnership for Robotics in Europe initiative, known as SPARC. The EU calls it the world’s largest robotics research program and expects it will “create more than 240,000 jobs”.
Job growth is certainly happening at Denmark’s Universal Robots, which also makes collaborative robots. The company has grown 40-fold in the last four years, employs 110 people and is putting on another 50 in 2014. Its robots – UR5 and UR10 – look like disembodied arms with cameras attached. They are operated by desktop controllers and taught tasks using tablet computers. They are not as social as Baxter, but they are able to work alongside humans.
“The more a company is allowed to automate, the more successful and productive it is, allowing it to employ more people,” chief executive Enrico Krog Iversen told The Financial Times in May 2014. But the jobs they will be doing will change, he argues. “People themselves need to be upgraded so they can do something value-creating.”
That’s been true for some robot clients. Both Universal Robots and Rethink Robotics say customers have hired more people as output in small companies has increased.
Brooks believes the fear that robots are going to take away all the jobs is overplayed. The reality could be the opposite, he argues. It’s not only advanced Western economies that are faced with a shrinking human workforce as their populations age. Even China is facing a demographic crisis. The number of adults in the workforce will drop to 67% by 2050, he says. By the time we’re old and infirm, we could all be reliant on robots. Says Brooks, “I’m not worried about them taking jobs, I’m worried we’re not going to have enough smart robots to help us.”
Addendum: Close encounters of the robotic kind
The term “uncanny valley” was coined by Japanese robot engineer Masahiro Mori in 1970. His thesis is that as a robot’s appearance is made more humanoid, our emotional response will become increasingly positive and empathic – but only up to a point. “To a certain degree, we feel empathy and attraction to a human-like object, but one tiny design change, and suddenly we are full of fear and revulsion,” Mori told The Japan Times.
Sony Pictures Imageworks encountered a similar issue with The Polar Express in 2004. Many critics said its mannequin-like human characters “gave them the creeps”. Some compared them to zombies. In a 2009 study cognitive scientist Ayse Pinar Saygin of the University of California in San Diego scanned the brains of 20 subjects who were shown 12 videos of a humanoid robot waving, nodding and taking a drink of water. Subjects were then shown videos of the same actions performed by a woman, and then by a stripped-down version of the automaton with all of its metal circuitry visible. The functional MRI scans showed a clear difference in brain response when the humanoid appeared: the brain areas that lit up included those believed to contain mirror neurons.
Mirror neurons are thought to form part of our empathy circuitry. In monkeys they fire when the animal performs an action and when it sees the same action performed by another. They are not yet proven to act the same way in humans.
Saygin says her results suggest that if something looks human and moves like a human, or if it looks like a robot and acts like a robot, we’re unfazed. However, if it looks human but moves like a robot, the brain fires up in the face of the perceptual conflict, she says.