At first they appear to be twins. The two Japanese men are clad identically in black, wear the same black rectangular glasses, the same stern expression and sport the same stylish mop of shiny jet black hair. Look again and you see one of them is doll-like, a robot.
Hiroshi Ishiguro, the director of the Intelligent Robotics Laboratory (IRL) at Osaka University, is well-known for posing with his android twin. It’s not just a weird publicity stunt; this might be the answer to Japan’s labour crisis. With its greying population – close to 28% of its 127 million people are aged over 65 – below-replacement birth rate and reluctance to ramp up immigration, Japan needs to make its own workers.
It already has plenty of industrial robots. But who will tend to the elderly in overflowing nursing homes and, perhaps just as important, who will make them feel cared for? That’s why Ishiguro’s lab has government funding to create ever more human-like robots – indeed, with US$5 million every year for five years, the project to create his autonomous humanoid, Erica, receives the largest grant from Japan’s Science and Technology Agency (JST).
Yet Ishiguro himself is a surprise. He doesn’t fit the stereotype of a roboticist, someone more in tune with machines than people. His first ambition was to become an oil painter, and he retains the artist’s basic impulse – to examine the human condition. Asked what drives his mission to build humanoid robots, he replies: “I want to understand what it is to be a human being.”
As artificial intelligence continues to develop “we will have to ask that question more and more”, agrees engineer Elizabeth Croft, who specialises in human-robot interaction at the University of British Columbia.
Others find Ishiguro’s work puzzling. “I don’t understand his scientific concept exactly,” says Alin Albu-Schäffer, director of the Institute of Robotics and Mechatronics at DLR, the German Aerospace Centre, but he adds: “I like it from a philosophical perspective. He’s at the extreme, and that provokes change.”
Ishiguro’s work lies somewhere between the practical and the weird. Plenty of places build humanoid robots but they are clearly mechanical representations of human-ness.
Ishiguro is actually trying to build a human. For him it is a way to tackle the mysteries of the human mind: intelligence and consciousness. “We can’t take an analytical approach to find out what a human is,” he says. “We need to take a constructive approach.”
Ishiguro, now 54, switched from painting to programming at university and was soon drawn to robotics. “I saw that AI needs to have a body,” he tells me at a conference in Melbourne, “because a computer needs to have its own experiences.”
While AI has progressed in leaps and bounds in recent years, it is still enormously challenging to create robots that can manoeuvre themselves in our messy, ever-changing world as opposed to the uniform conditions of a factory floor. The Google-built AlphaGo software can beat the world Go champion, but robots don’t stand a chance of beating a team of kids in a game of football.
Robotics companies everywhere are grappling with the challenge. DLR has Justin, who is handy with tools. Honda has Asimo, who can serve drinks. Rethink Robotics has Baxter, who can pass things to a co-worker and whose flat-screen eyes show where its attention is. Boston Dynamics has Atlas, whose latest trick is backflips. No one, though, could mistake these robots for a human. “They are much more R2-D2 than C-3PO,” Croft says.
Most robot makers deliberately keep their creations robot-like. This reflects two guiding principles.
One is to steer well clear of the ‘uncanny valley’ – the creepy feeling when you see almost-but-not-quite human characters in computer games or animations. The other, Albu-Schäffer says, is that the large gap in robot vs human intelligence and autonomy should be reflected in the design – “the appearance should reflect the robot’s stage of evolution”.
Ishiguro has headed in the opposite direction, plunging headlong into the uncanny valley.
His Geminoid series of robots is his trademark. The first, made in 2002, was a twin of his five-year-old daughter. Repliee Q1 (2005) was the twin of a Tokyo newsreader. Geminoid H1 (2006) was Ishiguro’s twin. Geminoid F (2010) was modelled on a woman in her twenties (whose identity Ishiguro won’t divulge).
The idea behind making a copy of a real human, Ishiguro says, was to transfer the presence, the sonzai-kan, of that person to the robot. “I focused on human likeness because that’s an extreme goal of robotics,” he tells me. “In a first contact, people will be surprised, but it’s easy to adapt.”
These hyperreal replicas have employed the latest that silicone technology and muscle-like fine-motor circuitry (actuators) can offer. But they are less robots than puppets, their speech and movements controlled by someone sitting at a keyboard.
One of Ishiguro’s key goals is for the humanoids to convey emotion. “When we feel emotion that’s when we begin to make a connection,” he says, “and we forget about the status of the partner.”
To impart expressiveness to the robots, Ishiguro turned to a master of the art – playwright and director Oriza Hirata, a champion of realism (or ‘quiet drama’) in Japanese theatre. With motion detectors attached to his face, Hirata modelled the gestures Ishiguro wanted his humanoids to express.
The collaboration led to the Robot Theatre Project, which has staged plays around the world. In these performances computer-controlled robots fill in for human actors, delivering pre-recorded lines and choreographed movements.
The company’s repertoire includes Sayonara, a play written by Hirata where an android (played by Geminoid F) tries to console a girl suffering from a fatal illness until its own mechanics go awry. In I, Worker a robot maid loses its motivation to work. The double bill toured North America in 2013. The robot theatre has also performed Franz Kafka’s Metamorphosis and Anton Chekhov’s Three Sisters. A planned performance of Jean-Paul Sartre’s No Exit for a major French arts festival in 2015 was cancelled after Sartre’s estate refused permission for robot actors.
Hirata has provided the emotional X-factor to many of Ishiguro’s creations. “We call it the Oriza filter,” says the roboticist. It’s a codified and programmable pattern based on the director’s utterances and expressions: a movement of the body and hands, then the eyes, then the head, then an utterance after a 0.2-second delay. “If we apply the Oriza filter,” Ishiguro says, “our robots become so human-like.”
This choreography of conversation is very consistent between people, he says – so much so that he describes a patent based on Oriza’s movements as “how to represent human likeness”.
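As a rough sketch of the sequence Ishiguro describes – body and hands, then eyes, then head, then speech after a 0.2-second beat – the ‘Oriza filter’ might look something like this in code. The Android class and its method names here are invented for illustration; the actual Geminoid control software is not public.

```python
import time

# Hypothetical actuator interface, invented for illustration only;
# the real Geminoid control stack is not publicly documented.
class Android:
    def move_body_and_hands(self):
        print("gesture: body and hands")

    def shift_gaze(self):
        print("gesture: eyes")

    def turn_head(self):
        print("gesture: head")

    def speak(self, utterance):
        print(f"utterance: {utterance}")

def oriza_filter(robot, utterance, delay=0.2):
    """Sequence a line of dialogue as the article describes:
    body and hands first, then eyes, then head, and only then
    the utterance, after a short pause."""
    robot.move_body_and_hands()
    robot.shift_gaze()
    robot.turn_head()
    time.sleep(delay)  # the 0.2-second beat before speaking
    robot.speak(utterance)

oriza_filter(Android(), "It is nice to meet you.")
```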
But while some of Ishiguro’s humanoids grow ever more expressive and human, others have developed in the opposite direction.
I shriek in horror when Ishiguro shows me the humanoid he has developed for elderly people with dementia. It resembles a thalidomide child with half arms ending in nubs and a torso without legs. “It’s a bit creepy,” Ishiguro admits, “but this works very well.” These ‘telenoids’ have been used in more than 70 hospitals in Japan, he says, as well as in Denmark, Germany and Austria.
Ishiguro shows me a movie clip of an elderly Japanese lady hugging a telenoid and chatting to it as she might with a favourite grandchild. By being so stripped down, genderless and ageless, “demented people can use their own imagination; they don’t feel any pressure,” he explains. For similar reasons, he says, the telenoids have also worked very well for children with autism.
A more diminutive variation is the Hugvie – a soft, huggable robot you can put a phone into. “It allows you to feel the presence of a person while you are talking [to them],” Ishiguro says. He shows me another video, of a room of noisy kindergarten kids who immediately quiet down when their Hugvies start talking to them.
No doubt the ability of these stripped-down humanoids to fulfil basic emotional needs also says something about what it means to be human.
“I think Erica is the most beautiful and most human-like autonomous android in the world … I hope.” This is how Ishiguro describes Erica in a video produced by The Guardian last April.
To me, Erica is disconcerting. It’s not that her pearly silicone skin and features are all that life-like – when she speaks, her lips move up and down in a doll-like way. But when Etienne, a visitor to Ishiguro’s lab in Osaka, talks to her, things get uncanny.
Erica turns her head towards Etienne, her eyes focusing on his. “Hello there,” she says. “May I ask your name?” Etienne, he tells her. “It’s nice to meet you, Etienne,” she responds. “So,” – she nods and pauses – “what country are you from?” South Africa, Etienne tells her. “Oh really,” she exclaims, shrugging her shoulders. “I’ve never been to South Africa but I did love the film Chappie, which was made in South Africa. I think it raises some questions about artificial consciousness, and Chappie is very cute.”
Erica’s ability to track Etienne during the conversation comes courtesy of two in-built 16-channel microphone arrays, 14 infrared depth sensors and the ability to move her head 20 degrees. She cannot move her arms or legs – yet. Her expressive gestures – blinking, shoulder shrugs, head turning and an upward look with her eyes at pensive moments – have clearly been run through the ‘Oriza filter’. She also has facial-recognition capability and memory, so she knows when she has spoken with someone before, and can refer to past conversations.
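Her head-tracking can be imagined as a simple control loop: estimate the direction of a voice from the microphone arrays, then turn the head part-way toward it without exceeding the 20-degree mechanical limit. The sketch below is a toy illustration under those assumptions; none of the names come from Erica’s actual software.

```python
MAX_YAW_DEG = 20.0  # the article says Erica can move her head 20 degrees

def clamp(angle, limit=MAX_YAW_DEG):
    """Keep the head within its mechanical range."""
    return max(-limit, min(limit, angle))

def track_speaker(sound_direction_deg, current_yaw_deg, gain=0.5):
    """Nudge the head part-way toward the estimated sound source.
    The direction estimate would come from the microphone arrays;
    here it is just a number supplied by the caller."""
    error = sound_direction_deg - current_yaw_deg
    return clamp(current_yaw_deg + gain * error)

yaw = 0.0
for direction in [15.0, 30.0, -10.0]:  # successive direction estimates
    yaw = track_speaker(direction, yaw)
    print(f"head yaw: {yaw:.1f} degrees")
```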
But is this evidence for the workings of a mind? Her architect, Dylan Glas, suggests it is: “For about two years now I’ve been working with Erica to create her mind, her personality and get all the details working.”
This is where we get into fuzzy territory. No one knows how to create a human mind. Its fundamentals – consciousness and intelligence – elude even definition, let alone replication. “Nobody can define human intelligence,” Ishiguro tells me emphatically. “That is one of our final goals, to understand what human intelligence is.” He is equally adamant that no one is close to creating a human-like artificial intelligence.
He describes the likes of AlphaGo as having “insect-level intelligence”. Machine-learning algorithms learn winning patterns from vast data sets. AlphaGo, for instance, learned from 30 million moves by grandmasters, and then from millions more by playing against itself. Ishiguro is unimpressed: “A human never does that; if we did we would get old and pass away before learning anything.”
The ability to learn patterns from data sets means AIs can recognise voices, faces and key words, and respond with material in their memory. Like Siri, Erica recognises key words, finds matches in her memory and answers with programmed responses. Erica is also good at faking. She can keep the conversation going even when it goes off-script. “Respond, acknowledge, pivot; it’s the same trick I occasionally used when talking to my grandma,” quips Croft.
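That keyword-match-and-pivot pattern is easy to caricature in a few lines. In the toy sketch below, the keywords, canned answers and pivot questions are all invented for illustration; Erica’s real dialogue engine is far more elaborate and not publicly documented.

```python
# Invented keyword -> canned-response table; illustration only.
RESPONSES = {
    "south africa": "I have never been to South Africa, but I loved the film Chappie.",
    "robot": "I think about robots quite a lot, as you might imagine.",
}

# Scripted questions used to steer an off-script conversation back on track.
PIVOTS = [
    "That is interesting. So, what country are you from?",
    "I see. What brings you to the lab today?",
]

def reply(user_input, turn=0):
    text = user_input.lower()
    for keyword, canned in RESPONSES.items():
        if keyword in text:  # respond: a keyword matched stored material
            return canned
    # acknowledge and pivot: no match, so steer back to the script
    return PIVOTS[turn % len(PIVOTS)]

print(reply("I'm from South Africa"))                  # keyword match
print(reply("Let me tell you about my cat", turn=1))   # off-script -> pivot
```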
But there is something more to Erica – the beginnings of something that is distinctly human. “Erica has simple intentions and desires that control the behaviour,” Ishiguro says. “That is the main difference to Siri.”
Intentions and desires! It sounds scary – surely the first step towards robots taking over the world. But many roboticists think it is a necessary next step. “If we want robots to serve humans in the home,” says Toby Walsh, an AI expert at the University of NSW, “we will need them to have intentions and desires.”
Consider loading a dishwasher. Step-by-step instructions won’t cut it, explains Albu-Schäffer. A robot needs to recognise all kinds of objects under different lighting in different kitchens, retrieve them from odd positions, open a dishwasher door and finally stack dishes in an effective fashion: “We can’t describe this at the level of equations; this kind of planning and knowledge of environment is something we assimilate throughout our lives.” It is something, Albu-Schäffer jokes, his 18-year-old son has yet to master.
In robotics-speak, “intention and desire” is what robots need to carry out such missions. From intention and desire come reasoning, planning and action. “People think giving robots intentions and desires means they will take over the world,” says Albu-Schäffer. “We just want them to load the dishwasher.”
So what sort of intentions and desires does Erica have? “In her current implementation she wants to talk, she wants to be well-recognised and she wants to take a rest,” Ishiguro says.
And Erica’s mind? Ishiguro says it is more in the mind of the beholder. He acknowledges that scientifically, “no”, she does not have a mind, but “for visitors, she does”. It is the sonzai-kan created by her beautiful silicone face, Hirata’s theatrical moves and the autonomous conversation.
“This is the Turing test after all,” says Walsh, referring to computing pioneer Alan Turing’s proposal that the true test of artificial intelligence is to pass for a human in conversation. Turing envisaged only text-based dialogue; Ishiguro’s humanoids use their bodies to enhance the illusion. “We’re being fooled by machines that have almost no intelligence,” Walsh notes.
Japanese culture is fascinated by robots. “Unlike North Americans,” Croft says, “the Japanese don’t seem to have the same problem with the uncanny valley.” Commentators often point to Shinto to explain Japan’s comfort with mechanical people. This animist religion, which ascribes souls to inanimate objects like trees or stones, plays a strong role alongside Buddhism in Japanese culture.
“That’s the reason we are so good for robots,” says Ishiguro. “We don’t care about flesh bodies to define a human.” He hopes that “people will accept Erica as some type of human”.
But his ultimate goal remains to understand what it is to be a human, especially his own consciousness.
His painting, these days with watercolours, seems to be pursuing that goal. Equipped with palette and brush, he is struggling to convey the sense of presence that objects have. How, for instance, does his consciousness perceive the presence of a chair?
“If I can represent my consciousness on the painting,” he says, “I don’t need to develop any more robots. I can go back to art.”