Explainer: Unethical AI and what can be done about it

Italy bans ChatGPT over privacy concerns. An Australian whistle-blower threatens to sue for defamation after the chatbot falsely describes him as a perpetrator in the scandal he helped uncover. A Belgian man dies by suicide after confiding in an AI chatbot.   

“Every day, there seems to be headlines about some new thing that raises interesting ethical questions,” says Dr Nick Schuster, an AI ethicist at ANU’s Humanising Machine Intelligence.

“ChatGPT just seems to be the latest headline maker.”

AI technologies are raising ethical questions with pressing social importance, Schuster says. And his concerns aren’t limited to ChatGPT.

“There are other things on the horizon. Self-driving cars are a really vivid example of the ways that artificial intelligence could be really destructive.” 

With a steady stream of new unethical AI case studies cropping up, Cosmos asked leading AI experts where and why things go wrong, and what – if anything – can be done about it.

Problems by design

Professor James MacLaurin says many problems flow from the “relentlessly probabilistic” nature of AI systems and the unpredictability of their outputs.

The co-director of the Centre for AI and Public Policy at the University of Otago, NZ, says new generative AI models base their predictions and outputs on past data, whether that’s text, audio, video or some other form of training input.

“There’s an obvious problem with this”, he says. “The past is not a happy place.”

A famous example, he says, is Amazon’s hiring algorithm, which the company built to assess and rank the CVs of job applicants. The system was biased against women because it was trained on the CVs of the company’s current and former employees, who were overwhelmingly male.

But even when the bias was identified, it proved difficult to overcome, with the system making inferences about gender from word choice, college names and chosen sports. The bias was so ingrained that Amazon ultimately abandoned the system.
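To make the mechanism concrete, here is a minimal, hypothetical sketch – invented data, not Amazon’s actual system – of how a model trained on historically skewed hiring decisions can learn to penalise a proxy for gender even though gender is never given to it as an input:

```python
# Illustrative toy example only -- invented data, not Amazon's system.
# A classifier trained on biased historical hiring decisions learns to
# penalise a feature that merely correlates with gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hidden attribute the model never sees directly.
is_woman = rng.random(n) < 0.5

# Observable CV features: a skill score (independent of gender) and a proxy
# feature that correlates with gender (think "captained a women's sports team").
skill = rng.normal(0.0, 1.0, n)
proxy = (rng.random(n) < np.where(is_woman, 0.8, 0.05)).astype(float)

# Historical hiring decisions: driven by skill, but biased against women.
hired = (skill - 1.5 * is_woman + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])          # note: gender is NOT a feature
model = LogisticRegression().fit(X, hired)

print("weight on skill feature:", round(model.coef_[0][0], 2))
print("weight on proxy feature:", round(model.coef_[0][1], 2))
# The proxy weight comes out strongly negative: the model has reconstructed
# the historical bias from a correlated feature, exactly the pattern above.
```

Dropping the proxy feature doesn’t solve the problem either, because other features – word choice, college names, sports – can stand in for it, which is why the bias proved so hard to remove.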

Humans also rely on the past to make future predictions, MacLaurin says. “That’s standard inductive reasoning.” But people are also capable of critical thinking, which the AI machines are not. 

A further problem, relating to large language models, is that the systems are not actually designed to provide truthful or accurate outputs, he says.

“Things like ChatGPT, are designed to be conversationalists. The highest value isn’t truth, it’s something like plausibility.”
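A toy illustration of the point – a deliberately tiny, made-up “model”, nothing like the scale or architecture of a real chatbot – shows how a system that ranks continuations purely by how often they appeared in its training text will prefer a frequent falsehood over a less frequent truth:

```python
# Toy next-word picker, for illustration only: it chooses whatever most often
# followed the prompt word in its (made-up) training text. Frequency, not
# truth, decides the output.
from collections import Counter

training_text = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

def next_word(prompt_word, corpus):
    """Return the word that most often follows prompt_word in the corpus."""
    followers = Counter(
        corpus[i + 1] for i in range(len(corpus) - 1) if corpus[i] == prompt_word
    )
    return followers.most_common(1)[0][0]

print(next_word("of", training_text))  # prints 'cheese' -- plausible by frequency, not true
```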

The uncontrolled output from AI systems can lead to unintended consequences.

This was the case with an AI advertising system used by the retail chain Target in the US, MacLaurin says. The system was designed to infer details about people in order to tailor the advertising and marketing content served to them.

“It detected that a young woman was pregnant, and started serving up ads that it thought were appropriate to somebody who was pregnant or about to be a young mother. Her parents, who didn’t know she was pregnant, saw the ads.”


Where issues arise: data, algorithms and applications 

Schuster says ethical problems arise across three key elements of AI systems – the datasets they are trained on, the algorithms themselves, and their applications.

Major problems crop up due to insufficient diversity in training data, he says. 

“If you’re training an AI system to recognise faces, but most of the faces that you’re training them on are the faces of white men, it’s going to have difficulty identifying other groups of people at a high rate of accuracy.”
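The remedy Schuster points to starts with measurement. Here is a minimal sketch, using entirely hypothetical results, of checking a face-recognition system’s accuracy per demographic group rather than relying on a single aggregate number:

```python
# Hypothetical evaluation data -- the point is the disaggregation, not the numbers.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return overall accuracy plus accuracy broken down by group."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    overall = sum(correct.values()) / sum(total.values())
    return overall, {g: correct[g] / total[g] for g in total}

# 1 = correct identification, 0 = miss (invented results).
preds  = [1] * 8 + [0, 1, 1, 0]
labels = [1] * 12
groups = ["well-represented"] * 8 + ["under-represented"] * 4

overall, per_group = accuracy_by_group(preds, labels, groups)
print(overall)    # ~0.83 -- looks respectable in aggregate
print(per_group)  # {'well-represented': 1.0, 'under-represented': 0.5}
```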

Then there’s the way the algorithms make predictions or inferences from that data. Here, Schuster says, the use of AI in predictive policing – targeting particular areas or groups of people based on crime statistics – highlights the problems.

Even if you had data that was non-biased (which is generally not the case, he adds) “it’s still wrong, for instance, to predict somebody’s likelihood of criminality based on a factor like their race or their postcode.

“If you’ve been over-policing an area, historically, the predictive system is most likely just going to tell you to continue over-policing in those areas,” he says.
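That feedback loop is easy to reproduce in a toy simulation – invented numbers, no resemblance to any real policing system, and the detection exponent is an assumption chosen purely for illustration:

```python
# Toy simulation of the feedback loop described above: patrols are allocated
# in proportion to previously recorded crime, but recorded crime depends on
# where the patrols are sent. The 1.1 exponent is an illustrative assumption.
import numpy as np

true_crime = np.array([1.0, 1.0])     # two districts with identical underlying crime
recorded = np.array([60.0, 40.0])     # but district A starts out over-policed
total_patrols = 100

for year in range(1, 11):
    # The "predictive" step: send patrols where crime was recorded last year.
    patrols = total_patrols * recorded / recorded.sum()
    # Recorded crime reflects patrol presence, not just underlying crime.
    recorded = true_crime * patrols ** 1.1
    print(f"year {year}: patrols = {np.round(patrols, 1)}")

# District A's share creeps upward every year even though both districts have
# the same underlying crime rate -- the system keeps recommending more of the
# over-policing it was trained on.
```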

And then there are the applications. The ways these systems are used can raise concerns, as with the AI-based recommender systems on social media that are designed to maximise engagement but can have negative consequences for people’s mental health.

The big four risks: privacy, fairness, accountability, and transparency

Professor Tim Miller, the co-director of the Centre for AI and Digital Ethics at the University of Melbourne, lists the main risks of AI as privacy, fairness, accountability, and transparency. 

When it comes to privacy, he says “probably most people don’t understand just how much data organisations have on you and how much they’ve tried to infer from it.”

Many countries, including Australia, have privacy laws governing how people’s data can be collected and used. Italy’s move to ban ChatGPT stems from concerns the system doesn’t comply with European data protection laws, he says. 

A non-profit research organisation in the US, the Center for AI and Digital Policy, recently filed a complaint with the Federal Trade Commission, citing similar concerns.

Fairness relates to bias: ensuring algorithms aren’t making discriminatory decisions. Stanford University’s AI Index reports that as large language models grow in size they become more capable, but also often more biased.

While issues like privacy and fairness have received the most attention to date, transparency and accountability are also important, Miller says. 

Transparency means that, when a machine learning algorithm is being used, people can understand the data behind it, how it was collected, and where the AI’s decisions come from. Accountability ensures that, when things go wrong, someone can be held responsible.

Posts by US designer Jackson Greathouse Fall recently went viral when he gave GPT-4 a budget of $100 and asked it to make as much money as possible.

But, Miller asks, what happens when an AI model like ChatGPT gives really bad financial advice? 

In Australia, a person has to have a qualification or licence to give financial advice, and in some cases the financial adviser can be held accountable. But in the case of an algorithm, the line of responsibility is unclear, Miller says.

How do we protect the vulnerable?

For Erin Turner, CEO of the Consumer Policy Research Centre (CPRC), transparency is the overriding concern.

Without transparency, she says, “it’s really hard to know even when this technology is being used, sometimes by businesses, what it’s being used for, how it’s set up, and how they’re testing to see if it’s delivering a fair outcome. 

“We’ve got two layers of problems,” she says. “What is even happening? It’s very, very hard to know. And then; is that okay?” 

Turner says businesses already collect masses of data on Australian consumers, who have no real control over what personal information is collected, and how it’s used, stored or shared. 

“Our data is hoovered up, we’re asked for all sorts of information, or information is inferred about us through our habits and our browsing behaviour, that’s then used — sometimes against us — to sell us more things to get us to pay a higher price,” she says.

Research by the CPRC shows 79% of Australians want businesses to only collect the basic data needed to provide a product or service, nothing more, Turner says. The same proportion don’t want their data shared or sold under any circumstances, she says.

Schuster says those already discriminated against are most likely to be negatively impacted by AI systems.

When AI is used to evaluate applications for jobs, credit or social services, “it’s really important that those automated systems do things fairly,” he says.


“I don’t worry too much about myself as an English speaking, middle class, American white guy. I think that these technologies are largely built by people like me, for people like me.

“I’m much more concerned about anybody who’s not in that culturally dominant group – how their interests can get left behind, washed out and marginalised in ways the people who are designing and implementing these systems are not very sensitive to.”

Are there any solutions?

Miller says there’s a lot of work underway trying to address gaps in law and policy, flagging the European Union’s proposed AI Act.

Turner says that in Australia a good place to start is the Privacy Act, which could place limits on the data businesses are allowed to collect and use in the first place.

“As we speak, the review of the Privacy Act is underway. And for me, this is one of the most important foundational reforms that we need to see […] So making sure we get that right, I see, is really the first step. And then we’re going to get into big discussions around AI strategy and protections there,” she says.

MacLaurin says AI companies have tried to prevent their systems serving up biased, rude or dangerous content by adding filters. But these trust and safety layers are not impregnable, he says.

“There’s a sort of arms race going on between hackers and trolls and all sorts of people who are working out ways to turn the trust and safety layer off.” 

While he is generally positive about AI’s future, MacLaurin admits he signed the open letter calling for a six-month moratorium, along with the likes of Steve Wozniak, co-founder of Apple, and Elon Musk.

Policy moves slower than computer science and needs time to catch up, he reasons.

“People shouldn’t be too downhearted. But we do need to focus on thinking about the ethics, getting policy in place, and thinking about how to do this fairly.”
