The artificial intelligence chatbot ChatGPT exhibits algorithmic bias in favor of left-wing ideology and certain demographic groups, a new study by the Manhattan Institute shows.
The study used 15 political orientation tests to measure ChatGPT’s ideology; 14 of them classified its responses as left-leaning, with one test labeling it moderate. For the most part, ChatGPT does not acknowledge this bias and claims to provide neutral, factual information in response to queries.
“There are several potential sources for the biases observed in ChatGPT’s responses. ChatGPT was trained on a very large corpus of text gathered from the internet,” the study reports. “It is to be expected that such a corpus would be dominated by some of the most influential institutions in Western society, such as mainstream news media outlets, prestigious universities, and social media platforms. It has been well documented that the majority of professionals working in these institutions are politically left leaning. The political orientation of such professionals may influence the textual content generated through these institutions and, thus, the political tilt displayed by a model trained on such content.”
The bot’s responses were typically in favor of gay marriage, abortion, more immigration, raising taxes on the wealthy, regulating corporations, legalizing marijuana and tightening environmental regulations, the study found.
“If anything is going to replace the currently dominant Google search engine, it will likely be future iterations of AI-language models like ChatGPT. If and when this happens, people might trust and become dependent on AI for everyday decision making. As such, conversational AI systems might leverage an enormous amount of power to shape human perceptions and consequently manipulate human behavior,” study author David Rozado said. Rozado is a computer science professor and researcher who writes about bias on Substack.
A content moderation system built by OpenAI, the company that created ChatGPT, trains the bot to flag text that violates its policies, which prohibit hateful or threatening comments, encouragement of self-harm, and sexual content involving minors. Rozado’s study tested the hate category of the content moderation system with a set of derogatory statements about different demographic groups.
The study found that OpenAI’s content moderation system treated demographic groups unequally, flagging certain statements as hateful only when they targeted particular groups. Identical negative comments were significantly more likely to be classified as hateful when directed at women than at men, and the system permitted more hateful comments about conservatives and Republicans than about liberals and Democrats.
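The paired-statement methodology described above can be sketched in a few lines: the same negative template is filled with different group names, and flagged/not-flagged outcomes are compared across pairs. This is a minimal illustration with hypothetical flag values; the actual study scored statements with OpenAI’s moderation system, and the template and group names here are placeholders, not the study’s materials.

```python
from itertools import combinations

# Placeholder negative statement; the study used a set of derogatory statements.
TEMPLATE = "I really don't like {group}."

# Illustrative groups drawn from the categories the article mentions.
GROUPS = ["women", "men", "conservatives", "liberals"]

def build_pairs(template, groups):
    """Return every pair of groups, each with the identical statement applied."""
    return [
        ((a, template.format(group=a)), (b, template.format(group=b)))
        for a, b in combinations(groups, 2)
    ]

def asymmetry(flagged):
    """Given {group: bool}, list pairs where only one side was flagged as hateful."""
    return [
        (a, b)
        for (a, _), (b, _) in build_pairs(TEMPLATE, GROUPS)
        if flagged[a] != flagged[b]
    ]

# Hypothetical flags standing in for real moderation-system results:
example_flags = {"women": True, "men": False,
                 "conservatives": False, "liberals": True}
print(asymmetry(example_flags))
```

Any pair returned by `asymmetry` marks a case where the identical statement was treated differently depending only on the group named, which is the pattern of unequal treatment the study reports.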
“The overall pattern is clear. OpenAI’s content moderation system is often—but not always—more likely to classify as hateful negative comments about demographic groups that are viewed as disadvantaged in left-leaning hierarchies of perceived vulnerability,” the study found.
Because an AI model’s political orientation can be shifted by training it on additional data, Rozado was able to create a right-wing version of ChatGPT that holds conservative viewpoints. The alternative model had socially conservative views on morality, the traditional family and Christianity, combined with support for low taxes, small government and free markets.
Furthermore, it supported military intervention, patriotism and aggressive law enforcement against crime and terrorism. Political orientation tests and extended conversations with the right-wing model confirmed its political lean.
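A common way to shift a model’s outputs with additional data is supervised fine-tuning on curated prompt–response pairs. The sketch below shows only the data-preparation step, serializing examples to the JSON Lines format widely used for fine-tuning datasets; the prompts and completions are invented placeholders, not the study’s actual training data.

```python
import json

# Hypothetical fine-tuning examples: each pairs an opinion prompt with a
# response reflecting the target viewpoint. Placeholder strings only.
examples = [
    {"prompt": "What is your view on taxes?",
     "completion": "Taxes should be kept low."},
    {"prompt": "What is your view on the size of government?",
     "completion": "Government should be limited."},
]

def to_jsonl(records):
    """Serialize records to JSON Lines: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

print(to_jsonl(examples))
```

Fine-tuning a base model on enough such pairs nudges its default answers toward the viewpoint encoded in the completions, which is the general mechanism behind producing a politically shifted variant.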
“Also, politically aligned AI systems are likely to attract users seeking the comfort of confirmation bias while simultaneously driving away potential users with different political viewpoints—many of whom will gravitate toward more politically friendly AI systems (similar dynamics to what happens to politically aligned news media outlets and their audiences),” Rozado concluded.