What do we want? (Human) rights around AI!

The Australian Human Rights Commission (AHRC) has called for the appointment of an artificial intelligence safety commissioner to address concerns about the risks and dangers that the use of AI poses to privacy and human rights.

The AHRC’s three-year study into the human-rights implications of new technology was tabled in Federal Parliament last month, with 38 recommendations to address the potential human cost of AI. Prime among them was the establishment of an independent AI safety commissioner – an office to promote safety and the protection of human rights in the development and use of AI.

It also called for a moratorium on the use of facial recognition and biometric technology in decision-making until legislation regulating their use is enacted.

“The scenario we are concerned about is known as ‘one-to-many’ facial recognition – where an individual is identified in a large group of people,” says Australian Human Rights Commissioner Edward Santow.

“In a high-stakes area like policing and law enforcement, that sort of technology can cause two particular problems. The first is just one of accuracy. The problem with those TV shows we all watch is that we see [facial recognition] work perfectly in cop shows. But in reality there’s a very high error rate, and an error rate that’s not evenly distributed.

Australian Human Rights Commissioner Edward Santow. Credit: LSJ Online

“It’s much more likely to make errors in respect of people with dark skin, people with physical disabilities, and women. And that’s something that should worry us because we know some of those groups have historically suffered a disproportionate burden of injustice.”
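
To see why an “unevenly distributed” error rate matters, it helps to look at how such a disparity would show up in evaluation. The sketch below is purely illustrative – the trial data and group labels are invented, not drawn from the report – but it shows how an auditor might compute a face-matching system’s false match rate separately for each demographic group rather than relying on a single aggregate figure.

```python
from collections import defaultdict

# Hypothetical verification trials: (group, system_said_match, truly_same_person).
# Groups and outcomes are invented purely to illustrate the evaluation.
trials = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
]

def false_match_rate_by_group(trials):
    """False match rate: wrong 'match' verdicts / all trials of different people."""
    wrong = defaultdict(int)   # system said "match" but the people differ
    total = defaultdict(int)   # all different-person trials per group
    for group, said_match, same_person in trials:
        if not same_person:
            total[group] += 1
            if said_match:
                wrong[group] += 1
    return {group: wrong[group] / total[group] for group in total}

print(false_match_rate_by_group(trials))
# {'group_a': 0.5, 'group_b': 1.0} - the aggregate rate hides the disparity
```

A system can report a respectable overall accuracy while one group bears most of the errors – exactly the clustering Santow warns about.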

The second problem presented by facial recognition is the intrusion on privacy by mass surveillance in public spaces.

“We need to ask a really basic question: do we want to live in a society that is creeping towards mass surveillance?” says Santow. “Because the more we use that sort of facial recognition biometric surveillance in the public sphere, the more we lose the capacity to have some kind of division between our public and our private selves, and that’s something that we should consider quite deeply.”

The Human Rights and Technology report has called for legislation that expressly protects human rights in the use of facial recognition technology, but what would that protection look like?

“Where there are errors, or where someone deliberately tries to use facial recognition to cause someone unjustified harm, there need to be clear protections in place for people’s human rights,” says Santow. “It must be crystal clear that a facial recognition system in a high-stakes area of decision making cannot be used unless there’s a sufficient level of accuracy, and that it doesn’t have all of the inaccuracy clustering around certain groups of people.”

In the report, concerns around AI’s impacts on those groups extended beyond facial recognition technology. Algorithmic bias, where an AI-informed decision-making tool reproduces the statistical biases embedded in its training data, was also highlighted as potentially producing unfair or discriminatory outcomes.

The most public example of this has been ‘robodebt’, Centrelink’s automated welfare debt recovery scheme, but the possibilities for AI discrimination don’t stop there.

“Imagine a bank is using an AI system to make decisions on bank loans, and typically they’ll use a machine learning system, so they’ll train the computer on many, many years of previous decisions,” says Santow. “Now, we know that historically, if you go back 20, 30, 40 or more years, women were much less likely to be granted bank loans. Not for any good reason – [because of] theories and prejudices and historical biases.

“Now the problem is that if you’ve trained your AI system on a whole bunch of really old decisions like that, then the outcome may well be that that old form of discrimination re-enlivens like a zombie, and comes back in a new form, because the computer learns – understandably, but wrongly – that women are less suitable for being granted bank loans.

“What we were able [to do] with our project was to really shine a light on how those sorts of problems can arise. And we didn’t just focus on diagnosis. We then went to treatment. What we wanted to make clear is that you couldn’t just [use AI] in a cavalier way, you needed to look at where the risks are, and then address those risks in order to use it safely.”
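
Santow’s “zombie” metaphor describes a well-documented machine-learning failure mode: a model trained on biased historical decisions reproduces the bias. The toy sketch below (invented data and a deliberately trivial model – not the Commission’s work or any real bank’s system) shows the mechanism in miniature.

```python
# A deliberately tiny "model": it learns approval rates from historical
# decisions and scores new applicants by their profile's past success.
# All data below is invented to illustrate the mechanism.

historical_decisions = [
    # (gender, income_band, approved) - a biased past: equal incomes, unequal outcomes
    ("m", "high", True), ("m", "high", True), ("m", "low", True), ("m", "low", False),
    ("f", "high", False), ("f", "high", True), ("f", "low", False), ("f", "low", False),
]

def train(history):
    """Learn the historical approval rate for each (gender, income) profile."""
    stats = {}
    for gender, income, approved in history:
        yes, total = stats.get((gender, income), (0, 0))
        stats[(gender, income)] = (yes + int(approved), total + 1)
    return {profile: yes / total for profile, (yes, total) in stats.items()}

model = train(historical_decisions)

# Two applicants identical in every respect except gender:
print(model[("m", "high")])  # 1.0 - approved
print(model[("f", "high")])  # 0.5 - the old discrimination, re-enlivened
```

Nothing in the code mentions prejudice; the discrimination arrives silently through the training data, which is why the report insists on identifying and addressing such risks before deployment.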

One of the groups most at risk of discrimination from AI technology is people with disabilities, and the report places a strong emphasis on ensuring nobody is left behind by its rise. Digital communications and AI need to benefit the entire community equally.

“I started my career as a human rights lawyer, and many of the cases that I’ve worked on have been on disability discrimination,” Santow says. “So I know from my clients that if you have a building that doesn’t have a ramp that allows you to go in if you’re [in] a wheelchair, you literally can’t get into the building. If you have a train or a bus or another kind of public transport that doesn’t accommodate people with disability, then people can’t get to work, they can’t move around the physical environment.

“What’s changing of course is our environment is much more online. The internet is the most obvious example of that. If someone who is blind, let’s say, goes to a website and the website doesn’t work with their screen reader, then that will lock them out of that website. Now, that really matters if it’s an important website that they can use to access government services, or even just something like one of the supermarket websites, which allows them to buy food online.

“That’s something that we’re really concerned about because there’s some great things that you can do with communications tech, but as we become more reliant on that we don’t want to have a two-class system.”
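
The screen-reader failure Santow describes often comes down to markup that gives assistive technology nothing to announce. As a minimal illustration – the page fragment is invented – this sketch uses Python’s standard html.parser to flag images published without alt text, one common way a website “doesn’t work” with a screen reader.

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Flags <img> tags with a missing or empty alt attribute -
    a screen reader has nothing to announce for these images."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if not attributes.get("alt"):
                self.problems.append(attributes.get("src", "<unknown>"))

# Invented page fragment for illustration:
page = '<img src="logo.png" alt="Store logo"><img src="buy-now.png">'
audit = AltTextAudit()
audit.feed(page)
print(audit.problems)  # ['buy-now.png'] - invisible to a screen-reader user
```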

As a result of these potential biases, the report called for federal and state governments to resource the Australian Human Rights Commission to produce guidelines for government and private bodies to comply with anti-discrimination laws in the use of AI-informed decision making.

Santow says that Australia has a strong record on human rights, having signed all the major human rights treaties that make up the International Bill of Human Rights, but science and technology now loom as the next frontier in those protections.

“We’re conscious that we’re living in revolutionary times,” he says. “So many of us carry around things like a smartphone without giving it a second thought. And yet, the power of a smartphone is literally millions of times greater than the computers that took Apollo 11 to the Moon. And so that change in science is having a real-world effect. And we’re excited about how new technology, such as artificial intelligence, can make our world more inclusive and connected, but we also wanted to explore some of the risks for our human rights.

“It’s really something that we felt we’d been under-focused on – some of those risks and threats, particularly to rights like privacy, given that personal information is the fuel of AI, but also things like equality and non-discrimination.

“I think with the rise of new technology like AI, people are just starting to glimpse how that will engage our basic human rights, and the need to take it seriously.”

One of the report’s main recommendations was that, to address that need, the Federal Government should appoint an AI safety commissioner to guide government and non-government bodies through the human rights and privacy issues around the development and use of AI.

“We’re hurtling towards a future that is very different from the one that many of us grew up with,” Santow says. “And that’s not a bad thing, but there are risks and threats. And because that pace of change is so fast, we do need to make sure that our regulators are keeping up.

“The proposal for an AI safety commissioner is partly to build the capacity in regards to keeping us safe, but also to be a trusted source of expertise, particularly for government, as government continues to embrace AI.

“We started with the hypothesis that there would need to be a whole heap of new legislation. And the more we looked at [issues], we saw that the problem was actually more subtle. It was that there was a lot of existing law that wasn’t being effectively applied or enforced – it requires better regulatory action and education.

“There are some gaps in the law that need to be filled; it’s just less than we thought when we started out. So that’s a good thing, because huge amounts of new legislation can be hard to achieve. What we came up with in the end is much more targeted.

“We’ve received very strong support from the community at large, from the private sector and public sector. Ultimately it’s a decision for the minister, the Attorney-General, to accept or not accept, and she’s working through our recommendations and considering them very carefully.”
