The notion of transparency is today central to policy-making and political discourse in democracies around the world. In the fields of robotics and artificial intelligence, however, it is sometimes considered a bothersome intrusion.
Algorithms employed in robotics and the complex calculations of machine-learning models often function as “black boxes”, where the precise mechanics of the internal processes are invisible to observers.
This is not because AI designers are inherently secretive, but because building in the features that produce transparency – “human interpretability”, in the jargon – inevitably increases the cost and reduces the efficiency of the systems.
However, this lack of visibility is not value-neutral and urgently needs to be addressed at the highest levels, according to an opinion piece published in Science Robotics.
The piece, written by Sandra Wachter, of Oxford University, Brent Mittelstadt, of the Alan Turing Institute, and Luciano Floridi, of University College London, points to recent studies showing that AI “can make unfair and discriminatory decisions, replicate or develop biases, and behave in inscrutable and unexpected ways in highly sensitive environments that put human interests and safety at risk”.
As an example, they cite AI in self-driving cars that, faced with an imminent collision, must decide whether to hit a pedestrian or to swerve and thereby kill or injure the driver.
They also point to the US-made Knightscope security robot, an autonomous unit programmed to patrol premises, record and report criminal activity, and alert the authorities.
The opaque nature of the robo-cop’s decision-making algorithm leaves it open both to suspicion that it is being used as a mass surveillance device and to the possibility that its actions – much like those of some flesh-and-blood US police officers – might not be free of racial bias.
On a much larger scale, too, the algorithms used to determine results in areas such as search engine queries, credit applications and security assessments remain largely opaque, even though the results can have catastrophic effects on individual lives.
Wachter and her colleagues review legislative attempts to regulate AI and robotics decision-making, including the US Equal Credit Opportunity Act, and the EU’s Data Protection Directive. The latter, significantly, guarantees people access to the “knowledge of the logic involved” in automated decisions that affect them.
They also review the different avenues for ensuring transparency – or, at least, accountability – in AI, which primarily divide into industry self-regulation and government legislation.
The EU, they note, is increasingly in favour of legislative remedies. However, the complex and frequently hidden nature of algorithms makes regulation profoundly difficult.
“The inscrutability and the diversity of AI complicate the legal codification of rights, which, if too broad or narrow, can inadvertently hamper innovation or provide little meaningful protection,” they write.
The authors note that, at present, discussions about regulation and legislation tend to differ depending on whether they are focussed on robotics – which involves hardware and engineering – or AI, which concerns software and programming. This separation, they suggest, is unhelpful and needs to be abandoned.
“It misinterprets their legal and ethical challenges as unrelated,” they write.
“Concerns about fairness, transparency, interpretability, and accountability are equivalent, have the same genesis, and must be addressed together, regardless of the mix of hardware, software, and data involved.”