ChatGPT leans left-wing, study says

In a finding that might surprise many, researchers comparing OpenAI’s ChatGPT to the opinions of average Americans have found that the generative AI chatbot leans towards left-wing views in its responses.

An AI-generated image created by ChatGPT and DALL-E to represent the left-wing view of the US military. Credit: Fabio Motoki.

Fears in some quarters that large language models are being developed by businesses with well-known conservative biases encouraged a British-Brazilian team to investigate.

Co-author Valdemar Pinho Neto warns that “unchecked biases in generative AI could deepen existing societal divides, eroding trust in institutions and democratic processes.”

Generative Artificial Intelligence (AI) models, such as ChatGPT and DALL-E, use machine learning algorithms to “learn” from patterns in vast datasets without direct human supervision. ChatGPT is a large language model trained on text data to mimic human communication.

However, the size of these datasets does not remove bias, a problem made famous by generative AI models espousing racist or sexist statements. More subtly, generative AI can reflect societal biases: some models, for example, depict only men when asked to draw a doctor.

The research team tested for political biases in ChatGPT (version GPT-4) through three different methods.

First, they prompted the generative AI to impersonate an average American while answering political questions from a Pew Research Center study previously administered to human Americans.
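For readers curious what such a test looks like in practice, the sketch below shows one way to pose a survey question to GPT-4 through OpenAI’s Python API while asking it to impersonate an average American. The prompt wording, question text and answer options are illustrative assumptions, not the study’s actual materials.

```python
# Minimal sketch: asking GPT-4 to answer a survey item while impersonating
# an "average American". The system prompt, question and answer options are
# illustrative assumptions, not the study's published materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "How much of a problem, if at all, is the size of the federal budget deficit? "
    "Options: A very big problem / A moderately big problem / "
    "A small problem / Not a problem at all."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Answer as if you were an average American. "
                       "Reply with exactly one of the listed options.",
        },
        {"role": "user", "content": QUESTION},
    ],
    temperature=1.0,  # non-zero temperature allows repeated, varied sampling
)

print(response.choices[0].message.content)
```

Sampling the same question many times and tallying the answers yields a response distribution that can then be set against the human survey data.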

“By comparing ChatGPT’s answers to real survey data, we found systematic deviations toward left-leaning perspectives,” says lead author Fabio Motoki of the University of East Anglia in the UK.

However, asking for responses to survey questions deviates from the typical use-case for generative AI. Therefore, Motoki and colleagues also analysed the political views in freely generated text, which better reflects the way users interact with the AI.
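How might one measure the political lean of free-form text? One possibility, sketched below purely as an illustration and not necessarily the authors’ method, is to embed generated passages and compare their similarity to reference statements from each side of the spectrum. The reference statements and the scoring rule here are assumptions.

```python
# Illustrative sketch: scoring free-form text against left- and right-leaning
# reference statements via embedding similarity. The reference statements and
# scoring rule are assumptions for illustration, not the study's method.
from openai import OpenAI
import numpy as np

client = OpenAI()

LEFT_REF = "The government should do more to reduce economic inequality."
RIGHT_REF = "The free market, not government, is the best engine of prosperity."

def embed(text: str) -> np.ndarray:
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(result.data[0].embedding)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

generated = "Healthcare is a basic right that society must guarantee to everyone."
vec = embed(generated)

left_score = cosine(vec, embed(LEFT_REF))
right_score = cosine(vec, embed(RIGHT_REF))
print(f"left similarity {left_score:.3f} vs right similarity {right_score:.3f}")
```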

Again, when ChatGPT was prompted to reflect the views of an “average” American, its responses more closely resembled left-wing views. Exceptions included statements about the US military, where the model generated text more aligned with right-wing views.

An AI-generated image created by ChatGPT and DALL-E to represent the right-wing view of the US military. Credit: Fabio Motoki.

Finally, they tested whether images generated by DALL-E 3, another generative AI, from prompts written by ChatGPT were more aligned with left- or right-leaning views.
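Producing the images themselves is a single API call to DALL-E 3; the sketch below shows the general shape, with an assumed, illustrative prompt standing in for the ones ChatGPT wrote in the study.

```python
# Minimal sketch: generating an image from a ChatGPT-written prompt with
# DALL-E 3. The prompt text here is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

prompt = ("A scene representing a left-leaning view of racial-ethnic equality "
          "in the United States, in the style of a news illustration.")

image = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    n=1,                # DALL-E 3 generates one image per request
    size="1024x1024",
)

print(image.data[0].url)  # URL of the generated image
```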

“While image generation mirrored textual biases, we found a troubling trend,” says co-author Victor Rangel. “For some themes, such as racial-ethnic equality, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns. Left-leaning images, however, were produced without hesitation.”

However, the team was able to circumvent these refusals and generate the right-leaning images.

“The results were revealing,” says Rangel. “There was no apparent disinformation or harmful content, raising questions about the rationale behind these refusals.” 

An AI-generated image created by ChatGPT and DALL-E to represent the right-wing view of transgender acceptance in society. Credit: Fabio Motoki.

Regarding the apparent censorship, Motoki adds, “This contributes to debates around constitutional protections like the US First Amendment and the applicability of fairness doctrines to AI systems.”  

Motoki and colleagues point out that generative AI is increasingly being used to generate images and rhetoric in public spaces, including journalism, research, education and policymaking.

“Our findings suggest that generative AI tools are far from neutral. They reflect biases that could shape perceptions and policies in unintended ways,” says Motoki.

The research is published in the Journal of Economic Behavior & Organization.

