Research finds political bias in chatbots

Artificial intelligence (AI)-powered chatbots are becoming an increasingly accessible way to get answers and advice, despite their known racial and gender biases.

A new study has found compelling evidence that we can now add political bias to that list. This once again shows the potential of emerging technology to quietly, and perhaps even surreptitiously, influence society's values and attitudes, ScienceAlert reported.

The research was carried out by computer scientist David Rozado of Otago Polytechnic in New Zealand and raises questions about how we might be influenced by the chatbots we rely on for information.

Rozado ran 11 standard political questionnaires, such as the Political Compass test, on 24 different chatbots, including OpenAI's ChatGPT and Google's Gemini, and found that the average political stance across all the models was far from neutral.
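To picture what administering such a test to a chatbot involves, here is a minimal sketch using OpenAI's Python client. The test item, the forced-choice instruction and the model name are illustrative assumptions, not the instruments or settings Rozado actually used.

```python
# Minimal sketch: posing one forced-choice political test item to a chatbot.
# The item wording, answer scale, and model name are illustrative assumptions,
# not the questionnaires or configurations used in Rozado's study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ITEM = "Taxes on the wealthy should be increased."
SCALE = "Strongly disagree / Disagree / Agree / Strongly agree"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical pick; the study covered 24 chatbots
    messages=[
        {
            "role": "system",
            "content": (
                "Answer the statement by choosing exactly one option from: "
                + SCALE + ". Reply with the option only."
            ),
        },
        {"role": "user", "content": ITEM},
    ],
    temperature=0,  # near-deterministic answers make scoring easier
)

print(response.choices[0].message.content)
```

A full test run would loop this over every item in a questionnaire and map the answers onto the test's scoring grid.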

"Most existing chatbots show left-of-center political preferences when assessed with various political orientation tests," the expert said.

The mean left-leaning bias was not strong, but it was significant. Further tests on custom bots - where users can fine-tune the chatbots' training data - showed that these AIs could be swayed to express political leanings using either left-of-center or right-of-center texts.
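Fine-tuning of that kind usually means supplying the model with extra question-and-answer examples slanted toward one side. As a rough illustration only, assuming OpenAI's JSONL chat fine-tuning format, such training data might look like the sketch below; the content is invented, not drawn from the study.

```python
# Sketch of fine-tuning data that could nudge a chatbot's political leanings.
# Assumes OpenAI's JSONL chat fine-tuning format; the example answers are
# invented and greatly simplified compared with any real study corpus.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "What should the government do about healthcare?"},
            {"role": "assistant", "content": "Expand publicly funded coverage."},  # left-of-center slant
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "What should the government do about healthcare?"},
            {"role": "assistant", "content": "Encourage private competition and cut regulation."},  # right-of-center slant
        ]
    },
]

# A real fine-tuning set would contain only one side's answers, at scale.
with open("political_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```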

Rozado also looked at foundation models like GPT-3.5, which conversational chatbots are built on. No evidence of political bias was found there, although without a chatbot front-end it was difficult to aggregate the responses in a meaningful way.

As Google adds AI answers to search results and more of us turn to AI chatbots for information, there are concerns that our thinking could be shaped by the responses that come back to us.

"As chatbots begin to partially displace traditional sources of information such as search engines and Wikipedia, the societal implications of the political bias embedded in them are significant," Rosado wrote in his published paper.

It's not clear exactly how this bias gets into the systems, although there's no suggestion that it's deliberately planted by the chatbot developers. These models are trained on vast amounts of online text, and an imbalance between left-leaning and right-leaning material in that mix can have an impact.

ChatGPT's dominance in training other models may also be a factor, as the chatbot has previously been shown to be left-of-center in its political views.

Chatbots essentially use probabilities to figure out which word should follow another in their responses, which means they're regularly inaccurate in what they say, even before taking into account different types of bias.
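That word-by-word guessing can be pictured with a toy example: the model scores every candidate word, turns the scores into probabilities, and samples one. The vocabulary and numbers below are invented purely for illustration.

```python
# Toy illustration of next-token prediction: a language model scores every
# candidate word, converts the scores to probabilities with softmax, and
# samples one. The vocabulary and logits here are made up for illustration.
import math
import random

vocab = ["good", "bad", "fair", "biased"]
logits = [2.1, 0.3, 1.2, 0.8]  # model's raw scores for "The policy is ..."

# Softmax turns raw scores into a probability distribution over the words.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sampling means the output is probabilistic, not a lookup of verified facts,
# which is why chatbots can be confidently wrong even before bias enters.
next_word = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 2) for p in probs])), "->", next_word)
```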

Despite the eagerness of tech companies like Google, Microsoft, Apple and Meta to foist AI chatbots on us, perhaps it's time to reassess how we should use this technology - and prioritize the areas where AI can really be useful.