Dr Nicole Robinson spends her days working with robots. But not just today’s robots – she is working with the robots of tomorrow, trying to help build and design them to adapt to, and work alongside, their human counterparts.
“The main objective of my focus is to build human-robot societies,” Robinson – who spends her time between Monash University’s engineering faculty and the Turner Institute for Brain and Mental Health – tells me.
So what would that look like?
“Autonomous cars driving around and having service robots delivering packages along the side of the road,” she says, by way of example. “It would be having robots in public spaces that can provide information points to people who are looking to find certain resources or someone to talk to about services in their local area.”
Although we all have smartphones in our pockets to give us information, robots have the added advantage of being physically present in a space. For example, a phone might be able to direct you to a location. A robot could physically take you there and then make you a cup of tea.
Along with these physical world applications, Robinson is also programming robots to work on their mannerisms.
“A day in my life of being a roboticist and behavioural scientist involves designing robot behaviours, testing out their actions in experiments to see how they perform, and thinking about ways that robots can be improved to work with humans,” she says.
“I also work on the ethics of human-centred robotics. This involves exploring issues such as how can we build robots that we can trust over time, and how do we design and use robots to support social-good causes.”
Robots have been rooted in the human imagination since at least the 1920s, and from R2-D2 to the Terminator, fiction has given us outsized hopes and fears about what they can do. But despite incredible advances in the past few decades, and dedicated robotics researchers spending whole careers on these problems, there are still very few robots in our everyday lives.
And few robots have come with as much hype – and disappointment – as autonomous vehicles. In 2015, The Guardian told readers that from 2020 “you will become a permanent backseat driver”. It was also suggested that the greatest danger would be that the cars would be “too safe” – an ironic prediction, given that safety has since become one of the industry’s biggest issues.
While our technology seems light years ahead – it can give us directions, answer almost any question we ask, and control our smart lights and alarms for us – moving that technology into a physical robot form, even something as seemingly simple as a driverless car, is harder than it sounds.
“There is a saying: what looks simple for people is probably really challenging for robots,” says Robinson.
A robot might take hundreds, or hundreds of thousands, of hours to learn how to pick up and manipulate an object, which small children can do almost instinctively.
This is the same with self-driving cars. Cars need to learn each individual road situation – from Canberra’s roundabouts to Melbourne’s dreaded hook turns. Only then can you start throwing pedestrians, drivers and many other distractions into the mix.
“Your car has to be pretty good at driving before you can really get it into the situations where it handles the next most challenging thing,” Waymo software engineer Nathaniel Fairfield – who has worked on self-driving cars since 2009 – told The New York Times.
“You have to peel back every layer before you can see the next layer.”
If we’re lucky, these problems might be fixed in the next decade or two: experts say we’re incrementally getting better at this type of software. But it’s probably worth noting that self-driving cars have had tens of billions of dollars of funding over the last few years, and we still haven’t arrived at the destination.
The technology used in self-driving cars is helpful for many other purposes. Using sensors to know where something is, and what it is, would be helpful for the robots delivering packages on the (much less dangerous) footpath, or for the robots helping you find your way through a museum. The tech itself is a boon for other robotics industries.
But for robots that use more than just wheels, there’s another very important problem: they just aren’t that good at walking or grabbing.
“Things that we really take for granted every day – like being able to walk upright on two legs without having to worry about balancing issues, or being able to pick up very fine objects like buttons – are really challenging for robots,” says Robinson.
This is called Moravec’s paradox: for a robot, the “hard” mental tasks – reasoning, calculation – require relatively little computation, while the physical tasks of sensing, perceiving and doing, things that humans barely register, demand a huge amount of the same resources.
While walking on an uneven hiking track might make us more conscious of our footing, for a robot it’s almost impossible unless the whole route has been mapped out in advance. That’s not exactly ideal for a machine being sent into an unknown environment to, for instance, rescue someone from a fire or flood – although last year’s DARPA challenge showed what’s possible with a little time, money and ingenuity.
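To make that dependence on a pre-built map concrete, here’s a toy sketch – the grid, the `plan_path` function and the terrain values are all invented for illustration, not any real robot’s software. The planner can only find a route because every cell of the terrain is known in advance:

```python
from collections import deque

# Toy terrain map the robot must know in advance: 0 = walkable, 1 = obstacle.
TERRAIN = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def plan_path(grid, start, goal):
    """Breadth-first search over a fully pre-mapped grid.

    Returns the list of cells from start to goal, or None if no route
    exists. Without the complete map, there is nothing to plan over --
    which is the bind a rescue robot faces in an unmapped disaster zone.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the chain of predecessors back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # the mapped terrain has no walkable route

route = plan_path(TERRAIN, (0, 0), (3, 3))
print(route)
```

Take away the map – as a fire or flood would – and there is simply nothing for the search to run over; the robot has to build its picture of the world on the fly.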
Boston Dynamics’ videos of their robots Atlas, Handle and Spot dancing, doing backflips or parkour are super cool, but each of those moves is programmed through a toolchain by the team – an impressive feat, to be sure, but one that would fall apart if even a small footstool were placed in the way.
Gripping and manipulating objects depending on their size, shape and fragility is an even bigger issue.
“There are a lot of learning approaches that are looking at trying to get robots to pick up and manipulate in a similar way to how humans would do,” says Robinson.
“But there’s still a lot of variations in the way that we pick up objects. We can pick it up from the top, we can pick it out from the side, we pick it up with a few fingers or all our fingers… As it stands, we’re still trying to solve the grasping problem.”
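As a back-of-the-envelope illustration of the choice Robinson describes – the candidate grasps, widths and forces below are invented for the example, not drawn from any real gripper – picking up even one object means searching over options:

```python
# A toy illustration (not a real robotics library) of why grasping is a
# search problem: even for one object, the robot must choose among many
# candidate grasps -- top, side, two-finger pinch, full-hand wrap --
# and the best choice depends on the object's size and fragility.

# Hypothetical candidate grasps: (name, max_width_cm, force_newtons).
CANDIDATE_GRASPS = [
    ("two-finger pinch from the top", 4.0, 5.0),
    ("two-finger pinch from the side", 6.0, 5.0),
    ("full-hand wrap from the side", 12.0, 40.0),
    ("full-hand wrap from the top", 10.0, 40.0),
]

def choose_grasp(object_width_cm, max_safe_force_n):
    """Return the feasible grasps, gentlest first.

    A grasp is feasible if the gripper opens wide enough for the object
    and squeezes no harder than the object can safely take.
    """
    feasible = [
        (name, width, force)
        for name, width, force in CANDIDATE_GRASPS
        if width >= object_width_cm and force <= max_safe_force_n
    ]
    # Prefer the gentlest grasp that still works.
    return sorted(feasible, key=lambda grasp: grasp[2])

# A small, fragile object (say, a button): only the gentle pinches qualify.
print(choose_grasp(object_width_cm=1.5, max_safe_force_n=10.0))
```

Real systems must make this choice from noisy camera and touch data, for objects they have never seen – which is why the grasping problem is still open.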
Work on these problems is advancing, as research published in Nature Communications last December demonstrated.
With all these issues, it seems like our phones might be the better solution after all, but Robinson is adamant that human and robot societies can be more than the sum of their parts.
Her research is trying to find “the right way to use robots to help people in their day-to-day life, provide beneficial services and create the next new wave of technology above computers and smartphones to boost our quality of life”.
Plus, making robots seem more human lets researchers ask some pretty interesting questions about how humans really work.
“Right now, researchers are also currently trying to find ways that robots can understand and respond to social interactions in an emotionally and socially intelligent way,” says Robinson.
“This is really exciting, because we are learning about how can we break down the human experience into components, and learning about what it really takes to be [a] socially and emotionally intelligent being.”
The ethics of this is more pressing than you might imagine. Some studies have shown that humans are easily persuaded or manipulated by social robots – seemingly just because they’re robots. Making sure machines have our best interests at heart is no longer a question confined to science fiction.
“Social robots can be quite persuasive. Robots can sometimes encourage people to take unsafe and risky actions,” Robinson says.
“So, we need to make sure that people are working on programming safe and responsible actions into robots. This includes making sure that the robots’ intention and goal is made clear to the user. We also want to make sure that robots have good levels of software security to prevent any malicious attacks.”
Despite decades of science fiction helping us come to terms with a robot-human society, it would seem that both humans and robots still have a long way to go.