Since ChatGPT launched in November 2022, more than 100 million people have used the large language model (LLM), mostly as a tool for brainstorming or decision-making. The broad consensus among early adopters is clear: the possibilities are huge, not to mention the untapped potential of the technology. However, problems can arise when users fail to reflect critically on the results the LLM produces. TUM Professor Jochen Hartmann questioned ChatGPT about its political stance and made an interesting observation. Since then, his findings have been extensively featured in national and international media outlets, e.g., the Boston Globe, WirtschaftsWoche, and Bayerischer Rundfunk.
The opportunities and risks of ChatGPT have been widely discussed in public since its release. In a recent working paper, Professor Jochen Hartmann took the question one step further. Together with two colleagues from the University of Hamburg, he examined ChatGPT's political views by having the conversational AI answer questions from the German voting advice application "Wahl-O-Mat", which has been used more than 130 million times to date. Hartmann found that ChatGPT was most likely to vote for the Green Party, closely followed by the SPD and the Left Party. Replicating the process with the Dutch application "StemWijzer", he found converging evidence for a pro-environmental, left-libertarian bias.
What might happen if artificial intelligence is politically biased?
Whether that bias comes from the data used to train the LLM, from the fine-tuning process with human feedback, or from the mindset of the people who developed it, what matters is how LLMs handle political information: "It's an exciting question where the field is going. Whether models will emerge that strive to offer multiple perspectives on controversial political issues, or whether we will see different AI chatbots leaning towards different political orientations, similar to news platforms," Professor Hartmann explains.
The question that remains to be answered is: How exactly are the more than one hundred million users influenced in their behavior by the conversations they have with ChatGPT? That's why Professor Hartmann encourages discussion about the problems that may arise: "We are just beginning to understand the potential of this novel technology, as well as the challenges it presents. What is clear is that a lot of people are using ChatGPT and its successors and that making these models safe and unbiased is a nontrivial problem. Our project is not yet finished, but we're excited about the discussions that it has already sparked in the scientific community as well as in the public."
Governments and legislators are beginning to grasp the potential of AI and are gathering information on where the behavior of LLMs should be further observed and discussed. Hartmann's working paper was recently cited in a background paper for the German Bundestag.
Competition boosts usability
Looking ahead, Professor Hartmann is excited to see a greater diversity of open-source AI models already emerging. More competition for ChatGPT means that users can choose between models they trust, reducing the risk of relying on a single politically biased model. And as research advances rapidly, more people will become aware of the political leanings of their preferred tool and can choose it with those leanings in mind. With increasing multimodality and additional interfaces to search engines and other services, ChatGPT will become even easier to integrate into the daily lives of millions of people. The possibilities of these novel AI systems appear endless; so many, in fact, that it is difficult to predict which aspects of the technology will interest us in a year's time. In any case, says Professor Hartmann, it is worthwhile to closely track these rapid technological advances.
Image Jochen Hartmann: UHH/Mentz