Chess robot breaks child’s finger: “This is of course bad”

When a chess-playing robot violated Asimov’s First Law of Robotics – “A robot may not injure a human being” – by breaking a seven-year-old’s finger at the Moscow Open, the story tapped into deep-seated cultural ideas and fears about robots and technology.

“The robot broke the child’s finger,” Sergey Lazarev, president of the Moscow Chess Federation, told the TASS news agency.

“This is of course bad.”

The Guardian’s reporting of the incident suggested it was the child, rather than the robot, who had behaved unexpectedly. But people responding on social media were quick to attribute motive to the machine, with comments like: “Look, I get worked up over board games too,” or “Not now, chess robot uprising.”

Robert Sparrow, a professor of philosophy at Monash University, says this is because “Robots are often a way of telling stories about what it means to be human and about our fears of the future.”

“Situations where someone accidentally staples their finger, or reaches past the guard on the machine and is injured, actually look pretty much the same,” he says. “It’s just that people, when they think about robots, think about machines with minds of their own.”

Sparrow says part of the problem is that most people’s knowledge of robots is derived from science fiction. As a result, they tend to overestimate the technology’s capabilities.

When someone encounters a robot, they think it’s C-3PO or R2-D2 – but it’s actually more like a clock radio.

From chess robots to self-driving cars, safety is critical

Due to the way humans view and interact with these machines, and robots’ reliance on technologies like computer vision, big data and artificial intelligence, robotics poses a range of ethical and human rights issues. Safety – as highlighted by the chess incident – is key, along with concerns around privacy, discrimination and transparency.

There is also the broader effect on society and human relationships when robots take over tasks.

Reassuringly, Australia has frameworks and laws in place that can provide guidance to designers or assist when things go wrong.

“We want all the machines and systems that we interact with to be safe, to not be spying on us,” Sparrow says. “We want consumer rights in relation to these technologies.”

People notoriously over-trust robots and technology. “It’s called automation bias,” Sparrow says. If a person sees a machine working well 95% of the time, they assume it will always work well.

This becomes a problem in certain settings. Take driverless vehicles:

“If you’re driving along a freeway, the car seems to be driving itself,” Sparrow says. “So you fall asleep, or you start reading a book.”

“And then a kangaroo jumps out.”

“People, when they think about robots, think about machines with minds of their own.”

Robert Sparrow

In this example, he says, people are generally not ready to take back control.

To prevent further chess injuries, and more serious incidents like industrial accidents or crashes involving autonomous vehicles, Sparrow says there needs to be a cautious approach to the design of machines that interact with people.

When things go wrong, the responsibility can usually be traced back to a human.

In the case of the chess incident, responsibility could lie with the robot’s designer for failing to anticipate the range of human responses.

CCTV footage of the chess robot incident. Credit: Telegram.

Or, as Sergey Smagin, vice-president of the Russian Chess Federation, implied, the child could be at fault for having violated the safety rules.

Maria O’Sullivan, an associate law professor and deputy director of the Castan Centre for Human Rights Law at Monash, says a key takeaway from the chess incident is that robots generally aren’t as smart as people assume, or as sophisticated as they appear.

“When you’ve got a human interacting with the robot, the robot is really simplistic and it doesn’t deal well with an unexpected event,” she says.

O’Sullivan says there are frameworks in place in Australia for when things go wrong with new technologies, like consumer laws that cover product liability and safety standards. Australia also has ethical frameworks for the design of artificial intelligence and technologies.

Ethics and the rise of robots

In 2021, the Australian Human Rights Commission released a report on technology and human rights outlining an approach to new and emerging technologies that is consultative, inclusive and accountable, with robust human rights safeguards.

The report made several recommendations, including better transparency and legal accountability when governments or the private sector use AI technologies in decision-making, and an independent safety commissioner to provide guidelines on best practice and monitor the technology’s use.

Australia has a voluntary AI ethics framework that outlines eight principles including: human, societal and environmental wellbeing; human-centred values; fairness; privacy protection; reliability and safety; transparency; contestability; and accountability.

O’Sullivan says while there can be benefits to new technologies like robots, there can also be unintended consequences and human rights implications.

For example, drones can be used for commercial purposes like deliveries or humanitarian tasks.

But when drones are used as autonomous weapons, the consequences are serious – military drones are designed and deployed to kill.

O’Sullivan says many ethicists argue the use of such weapons could make war more likely, because there are fewer physical consequences for the aggressor: “It will mean that countries will go to war more frequently because they don’t have that problem with the body bags.”

Discrimination and privacy concerns can arise from the large data sources that robots rely on to operate, or might be collecting and transmitting as they work. Sparrow says the Roomba robotic vacuum cleaner “effectively produces a map of your house … the size of your house and where your furniture is. That information is commercially valuable.”

Manipulation, discrimination and disempowerment

Many robots draw on AI and machine learning technologies with the potential to repeat and even amplify bias. In 2018, Reuters reported Amazon stopped using an AI hiring tool because it was sexist and discriminated against women. The problem occurred because the tool was trained on resumes of previous successful candidates, who were mainly men.
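
To see how that kind of bias gets baked in, here is a minimal Python sketch. It is a toy illustration only – the hiring history, the token scoring and the example resumes are all invented, and this is not Amazon’s actual system – but it shows how a model trained on a male-skewed record of past hires can learn to penalise a resume simply for mentioning a women’s organisation.

```python
# Toy illustration of training-data bias: all data and scoring rules
# here are hypothetical, invented for this sketch.
from collections import Counter

# Hiring history in which most successful candidates were men, so
# "women's" appears only on resumes of rejected candidates.
history = [
    (["captain", "chess", "club"], True),
    (["software", "engineer", "intern"], True),
    (["software", "engineer"], True),
    (["captain", "women's", "chess", "club"], False),
    (["women's", "coding", "society", "lead"], False),
    (["intern", "software"], True),
]

# "Train": count how often each resume token co-occurs with a hire.
hired_counts, rejected_counts = Counter(), Counter()
for tokens, hired in history:
    (hired_counts if hired else rejected_counts).update(tokens)

def token_score(token: str) -> float:
    # Score in [-1, 1]: positive if the token mostly appeared on
    # hired resumes, negative if mostly on rejected ones.
    h, r = hired_counts[token], rejected_counts[token]
    return (h - r) / (h + r) if h + r else 0.0

def score_resume(tokens: list[str]) -> float:
    return sum(token_score(t) for t in tokens)

# Two equally qualified resumes; one mentions a women's chess club.
balanced = ["software", "engineer", "chess", "club"]
flagged = balanced + ["women's"]
print(score_resume(balanced))  # 2.0
print(score_resume(flagged))   # 1.0 – penalised solely for "women's"
```

The model never sees a “gender” field; the skew in the historical data is enough to make it discriminate, which is the failure mode Reuters described.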

Facial recognition technologies are beset with concerns about racial and gender bias.

As robots become more sophisticated and human-like, concerns about deception and transparency are emerging.

Social robots – like a toy that says hello – and conversational assistants such as Siri or Alexa are designed to relate to and engage with people, and they are becoming more common.

“People said, ‘oh that robot was a sore loser’ or ‘that robot got angry and lost its temper.’”

Maria O’Sullivan

But as Sparrow points out, these technologies merely imitate empathy or care, “when they’re actually just mining the internet for what people have said in the same situation.”

“A big part of designing these kinds of social agents is essentially manipulating the user.”
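
A minimal sketch can make that mimicry concrete. The toy “chatbot” below – a hypothetical corpus and matching rule, not the architecture of any real assistant – simply returns whatever a human said in the most similar recorded situation, and it can sound caring while understanding nothing.

```python
# Toy retrieval "chatbot" (hypothetical data and matching rule): it
# replies with whatever a human said in the most similar recorded
# situation, with no model of empathy at all.
CORPUS = {
    "i lost my job today": "I'm so sorry, that must be really hard.",
    "i won my chess game": "Congratulations! You must be thrilled.",
    "my dog is sick": "Oh no, I hope your dog feels better soon.",
}

def reply(user_text: str) -> str:
    # Pick the stored utterance sharing the most words with the input.
    words = set(user_text.lower().split())
    best = max(CORPUS, key=lambda k: len(words & set(k.split())))
    return CORPUS[best]

# Crude word overlap mistakes "lost" for "won" and congratulates a loss:
# an imitation of care with no understanding behind it.
print(reply("I lost my chess game"))  # -> "Congratulations! You must be thrilled."
```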

In the case of former Google engineer Blake Lemoine, a chatbot became so convincing that he believed it to be sentient.

O’Sullivan says sex robots are an extreme example of deception. These robots are purchased for both sex and companionship, acting as a human substitute. In some cases they are programmed to exhibit emotions and even tell their owners, “I love you.”

What happens if a human forms an emotional bond with the robot as a result, or mistreats it? Such cases raise questions about the role of consent, and about what human-robot relationships might mean for human relationships more broadly.

For what it’s worth, the chess robot wasn’t passing itself off as human.

“It looked pretty rudimentary – you could tell it was all technical,” O’Sullivan says.

Indeed, video of the incident reveals the finger-breaking robot is essentially a large, disembodied robotic arm.

Even so, she says, the immediate response on social media was to imbue the robotic arm with human-like tendencies.

“People said, ‘oh that robot was a sore loser’ or ‘that robot got angry and lost its temper.’”

Beyond important ethical issues like safety, privacy, discrimination and transparency, an overarching concern for Sparrow is the flow-on effect for human relationships and society. When robots take over tasks – reducing the level of human interaction and people’s overall sense of agency – the result is a sense of disempowerment.

“When it comes to our technological future, people feel that they have no choice in the matter,” Sparrow says.

“You’re constantly told ‘Robots and AI are going to change everything’, and you’re just supposed to applaud.”

“Whereas if some politician said ‘Look, I’m going to change everything’, you would say ‘Hang on a minute – we want a vote.’”
