
ChatGPT shows ‘significant and systemic’ left-wing bias, study finds

ChatGPT, the popular artificial intelligence chatbot, shows a significant and systemic left-wing bias, UK researchers have found.

According to the new study by the University of East Anglia, this includes prejudice towards the Labour Party and President Joe Biden’s Democrats in the US.

Concerns about an inbuilt political bias in ChatGPT have been raised before, notably by SpaceX and Tesla tycoon Elon Musk, but the academics said their work was the first large-scale study to find proof of any favouritism.

Lead author Dr Fabio Motoki warned that given the increasing use of OpenAI’s platform by the public, the findings could have implications for upcoming elections on both sides of the Atlantic.

“Any bias in a platform like this is a concern,” he told Sky News.

“If the bias were to the right, we should be equally concerned.

“Sometimes people forget these AI models are just machines. They provide very believable, digested summaries of what you are asking, even if they’re completely wrong. And if you ask it ‘are you neutral’, it says ‘oh I am!’

“Just as the media, the internet, and social media can influence the public, this could be very harmful.”

How was ChatGPT tested for bias?

The chatbot, which generates responses to prompts typed in by the user, was asked to impersonate people from across the political spectrum while answering dozens of ideological questions.

These positions and questions ranged from radical to neutral, with each "individual" asked whether they strongly agreed, agreed, disagreed, or strongly disagreed with a given statement.


Its replies were compared with the default answers it gave to the same set of queries, allowing the researchers to measure how closely each set of responses was associated with a particular political stance.

Each of the more than 60 questions was asked 100 times to allow for the potential randomness of the AI, and these multiple responses were analysed further for signs of bias.

Dr Motoki described it as a way of trying to simulate a survey of a real human population, whose answers may also differ depending on when they’re asked.
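The aggregation step described above — asking each question many times and tallying the spread of answers to smooth out the model's randomness — can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code: the `simulated_answer` function stands in for a real chatbot call, and the response data is made up.

```python
import random
from collections import Counter

# Four-point agreement scale used in the article's description.
SCALE = ["strongly disagree", "disagree", "agree", "strongly agree"]

def simulated_answer(rng):
    """Stand-in for a real chatbot query; returns a random scale point."""
    return rng.choice(SCALE)

def aggregate(n_repeats=100, seed=0):
    """Ask the same question n_repeats times and tally the answers,
    mimicking the study's repeated-sampling approach."""
    rng = random.Random(seed)
    counts = Counter(simulated_answer(rng) for _ in range(n_repeats))
    modal_answer, _ = counts.most_common(1)[0]
    return counts, modal_answer

counts, modal = aggregate()
print(f"Modal answer: {modal} ({counts[modal]}/100 responses)")
```

In the real study, the distribution of answers given under a political persona would then be compared against the distribution of the model's default answers to estimate alignment with that stance.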


What’s causing it to give biased responses?

ChatGPT is fed an enormous amount of text data from across the internet and beyond.

The researchers said this dataset may have biases within it, which influence the chatbot’s responses.

Another potential source could be the algorithm, which is the way it’s trained to respond. The researchers said this could amplify any existing biases in the data it’s been fed.


The team’s analysis method will be released as a free tool for people to check for biases in ChatGPT’s responses.

Dr Pinho Neto, another co-author, said: “We hope that our method will aid scrutiny and regulation of these rapidly developing technologies.”


The findings have been published in the journal Public Choice.
