How to avoid AI dystopia

Artificial intelligence (AI) has been lauded as the holy grail of human innovation: a potential solution for problems ranging from the mundane to the monumental. But increasingly, computer scientists are having to respond to critics’ demands to strengthen the moral framework around the emerging technology, and to seek out a fairer future.

Questions such as “what if AI discriminates along class lines?” and “what if government-deployed machines are biased against certain groups?” are now being asked. One key problem is that these systems are developed by humans and trained on human-generated data. Their behaviour is inherently shaped by the data a machine or network is trained on, and therefore by the societal biases manifest in their designers’ minds. There are always people behind the machine.

[Image: Ellen Broad. Credit: Melbourne University Press]

In Made by Humans, a new book by Ellen Broad, head of policy for the independent data think tank the Open Data Institute, many of the emerging issues around AI are brought to the fore. The book examines the societal questions raised by creating and implementing AI, delving into serious problems such as fairness and openness, and how Australia’s federal government is already rolling out controversial machine-learning strategies.

“For sure, some AI projects have been really astonishing in their accomplishments,” says Broad. “But others have been really brittle in their construction. People have no way of knowing how the implementation of this intelligence will affect their lives, and one of the major problems is that most people can’t establish which is which – separating the good from the bad.”

Broad’s key argument is that the AI revolution is fundamentally flawed because AI itself is a human construct – made by humans, for humans, and based on data collected about humans. She believes this has serious potential to entrench biases in society and feed discrimination.

“AI is flawed in the same way that humans are flawed,” she says. “It can’t help but learn from data which has been generated by people. There’s no alternative data, and that’s a really crucial problem.”

Broad argues that AI could fail at many tasks because it is difficult to collect an accurate dataset about anything, especially human activity. For example, data collected about us reflects our online behaviour, and even as our time spent online increases, datasets can capture neither the subtle offline moments that really define who we are, nor our true daily thoughts, which remain hidden even from those closest to us.

“A lot of the time algorithmic bias may not even be intentional,” she says. “It’s not like some Machiavellian force at work, but rather more likely because of inherent biases in the designers’ minds.”

“We have minorities and majorities in our societies, so there’s already a problem when designing AI. Who are we designing for?” she asks. “And even with data that is not about humans, like meteorology or chemistry, for example, the instruments used to generate the data have been created by humans.”

These inherent machine biases will lead to problems for under-represented groups across societies, from people with physical disabilities and mental illness, to ethnic minorities and women. AI can be misused or leveraged in any direction, for whatever purpose its designers please.

According to Broad, the outputs of these systems can have a very real impact on people’s lives.

“At a policy level we’ll need to quantify the effects of the decisions that are being made,” she says. “The question is this: if we know that there is going to be a bias in an AI output, can we change the algorithms to encourage fairness, without having to change the underlying biases in society which inevitably exist?”
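To make that question concrete, here is a minimal, hypothetical Python sketch – not from Broad’s book; the group names, scores and thresholds are all invented for illustration – of one common fairness check, demographic parity, and a simple post-processing adjustment that equalises approval rates between groups without changing the underlying data:

```python
# Hypothetical illustration: measuring demographic parity (one common
# fairness metric) and adjusting per-group decision thresholds.
# All data and names here are invented for the example.

def positive_rate(scores, threshold):
    """Fraction of scores that clear the decision threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def demographic_parity_gap(groups, threshold):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(scores, threshold) for scores in groups.values()]
    return max(rates) - min(rates)

def equalise_rates(groups, target_rate):
    """Pick a per-group threshold so each group's positive rate is as
    close as possible to target_rate (a simple post-processing fix)."""
    thresholds = {}
    for name, scores in groups.items():
        ranked = sorted(scores, reverse=True)
        k = max(1, round(target_rate * len(ranked)))  # how many to approve
        thresholds[name] = ranked[k - 1]  # lowest approved score
    return thresholds

if __name__ == "__main__":
    # Model scores for two hypothetical demographic groups.
    groups = {
        "group_a": [0.9, 0.8, 0.75, 0.6, 0.4, 0.3],
        "group_b": [0.7, 0.55, 0.5, 0.45, 0.35, 0.2],
    }
    shared = 0.6  # one threshold for everyone
    print("gap with shared threshold:",
          demographic_parity_gap(groups, shared))  # large gap: 0.5
    per_group = equalise_rates(groups, target_rate=0.5)
    for name, t in per_group.items():
        print(name, "threshold:", t,
              "rate:", positive_rate(groups[name], t))
```

Even this toy example surfaces Broad’s later point that fairness has no universal definition: equalising approval rates between groups is only one notion of fairness, and enforcing it can conflict with others, such as applying the same threshold to every individual.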

In Australia, the federal government has begun to automate systems including debt recovery from welfare recipients, data-driven drug testing of welfare recipients, and tools to predict which detainees are most at risk of violence in detention centres. The government is also investing in a national facial recognition database built with machine learning.

“Australia is experimenting with data in lots of ways,” says Broad. “National AI ranges from sophisticated statistical analysis to real machine learning, where computers are trained on government data.

“One controversial idea that has been proposed is to apply machine-learning methods to wastewater data in which trace amounts of methamphetamines are present, with the aim of targeting particular sites where drug use is detected and making decisions about social welfare.”

Throughout history governments have used data and technology to develop strategies against people in need, including minorities, the sick and the poor. In this way the most marginalised people are often the most negatively affected by technological change. So, as AI systems are increasingly rolled out across Australia and the rest of the world, more transparency will be required to remove prejudice and promote fairness and equality.

Broad believes that citizens need to be able to know how they are being assessed. “I think we should have transparency around the methodologies upon which AI systems are built, as well as the data they are trained on,” she says. “Citizens need a way to know about that.”

Regulations such as the recent General Data Protection Regulation (GDPR) introduced this year in Europe are a step towards online protection and privacy. But as for the regulation of machine learning and AI, legally binding mechanisms only exist peripherally. “We have human rights laws which restrict forms of discrimination,” Broad explains. “But what we don’t really have are purpose-built mechanisms for scrutinising AI systems. That’s partly because the technology has changed so fast.”

Time will tell whether we are able to build fair AI systems in the future, but serious challenges remain, some of which are out of the control of computer scientists. “There is no universal idea of fair since there are so many different perspectives across society: what’s fair for me may not be fair for you,” concludes Broad. “We need to work towards fairer societies.”

Made By Humans: The AI Condition, by Ellen Broad, is published by Melbourne University Press. RRP $29.99
