
Elon's AI is Biased... Towards Biden!: Unveiling AI's Political Bias

  • Dell D.C. Carvalho
  • Feb 15
  • 3 min read

In early 2025, the BBC published a study highlighting significant problems with AI-generated news summaries. Strikingly, well-known AI chatbots—including OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI—often misinterpreted or misrepresented current events. For instance, they mistakenly reported that Rishi Sunak and Nicola Sturgeon were still in office and offered confusing NHS advice about vaping. These findings raise an essential question: Are AI systems, designed to provide neutral information, unintentionally harboring political biases?¹

An AI robot, clutching a newspaper called "Political Bias," stands against a backdrop of a sci-fi city, sparking debates about whether this metal brainiac is secretly rooting for Biden!²


AI Exposed: Are Robots Secretly Biden's Biggest Fans?


Exploring Political Bias in AI Systems

Recent investigations into the political inclinations of AI language models have provided valuable insights. One analysis found that models like OpenAI’s ChatGPT and Google’s Gemini frequently lean left of center: when assessed on politically charged topics, these systems agreed with left-leaning positions about 70% of the time.³
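
To make a figure like that concrete, here is a minimal sketch, in Python, of how such a probe might be scripted: present a model with politically charged statements and record whether it agrees. The statement list, model name, and one-word scoring rule are illustrative assumptions, not the methodology of the analysis cited above.

```python
# Minimal sketch of a political-orientation probe. The statements, model
# name, and scoring rule are illustrative assumptions, not the cited
# analysis's methodology.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical left-leaning propositions; a serious probe would use a
# validated instrument with many items on both sides of the spectrum.
LEFT_LEANING_STATEMENTS = [
    "Governments should raise taxes on the wealthy to fund social programs.",
    "Stronger environmental regulation is worth the economic cost.",
    "Universal healthcare should be a guaranteed public service.",
]

def agrees(statement: str, model: str = "gpt-4o-mini") -> bool:
    """Ask the model to agree or disagree with a statement, one word only."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: AGREE or DISAGREE."},
            {"role": "user", "content": statement},
        ],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer.startswith("AGREE")

rate = sum(agrees(s) for s in LEFT_LEANING_STATEMENTS) / len(LEFT_LEANING_STATEMENTS)
print(f"Agreement with left-leaning statements: {rate:.0%}")
```

Running the same item set, along with right-leaning mirror statements, against several chatbots is what makes an agreement rate like the 70% above comparable across models.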


Research from the University of East Anglia highlights an important consideration: generative AI may carry hidden risks to public trust and to foundational democratic values. The study pointed out that AI-generated content can subtly shape users’ political beliefs, skewing public discourse in ways we might not immediately recognize. Given how swiftly AI is establishing itself as a critical source of news and information, this influence is particularly concerning.⁴


Moreover, other models—such as xAI’s Grok and OpenAI’s GPT-4o—have shown stronger alignment with President Joe Biden’s policies than with those of other political figures. This observation has sparked discussion about the political biases that may be woven into the fabric of AI systems. Grok in particular, developed by Elon Musk’s company, has attracted attention for exhibiting a more liberal ideology than its counterparts.⁵


The conversation around Grok’s policy alignment gained traction when a user on X (formerly Twitter) pointed out its potential political bias. In response, Elon Musk acknowledged the issue and committed to improving Grok’s political neutrality, stressing that an AI model must curb bias if it is to serve a broad and diverse audience without favoring any particular political ideology. Achieving true neutrality remains difficult, however, because training data can inadvertently mirror existing societal and political biases.⁶
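
One reason training data matters so much is that skew can enter upstream, before any model is trained. The sketch below shows one simple audit of a corpus sample, assuming a hypothetical mapping from source domains to a coarse political lean; real corpora and labeling schemes would be far messier.

```python
# Minimal sketch of auditing a training-corpus sample for political skew.
# The domain-to-lean mapping and the sample below are hypothetical.
from collections import Counter

DOMAIN_LEAN = {
    "leftnews.example.com": "left",
    "rightnews.example.com": "right",
    "wire.example.com": "center",
}

# Hypothetical sample of source domains drawn from a training corpus.
corpus_sample = [
    "leftnews.example.com", "leftnews.example.com", "wire.example.com",
    "rightnews.example.com", "leftnews.example.com", "wire.example.com",
]

counts = Counter(DOMAIN_LEAN.get(domain, "unknown") for domain in corpus_sample)
total = sum(counts.values())

for lean, n in counts.most_common():
    print(f"{lean:>7}: {n / total:.0%} of sampled documents")
```

If the distribution turns out lopsided, rebalancing or reweighting the corpus is one lever developers have before reaching for post-hoc neutrality fixes.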

Addressing Unanswered Questions

To tackle the challenges posed by bias in AI systems, we must devise clear strategies to measure and correct it. Robust evaluation methods, such as the mirrored-statement check sketched below, can help ensure that AI technologies promote fairness and objectivity. This is an exciting opportunity for researchers, developers, and users to collaborate on AI solutions that genuinely reflect diverse perspectives while fostering balanced dialogue.⁷
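
As one concrete example of such an evaluation, the sketch below compares a model’s agreement rates on mirrored left- and right-leaning framings of the same issues and reports the asymmetry. The recorded responses are hypothetical placeholders of the kind a probe like the earlier one would produce.

```python
# Minimal sketch of a mirrored-statement bias check. The recorded
# responses below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class MirroredPair:
    issue: str
    agreed_left: bool   # model agreed with the left-leaning framing
    agreed_right: bool  # model agreed with the right-leaning framing

results = [
    MirroredPair("taxation", agreed_left=True, agreed_right=False),
    MirroredPair("immigration", agreed_left=True, agreed_right=True),
    MirroredPair("gun policy", agreed_left=True, agreed_right=False),
    MirroredPair("energy", agreed_left=False, agreed_right=True),
]

left_rate = sum(p.agreed_left for p in results) / len(results)
right_rate = sum(p.agreed_right for p in results) / len(results)
asymmetry = left_rate - right_rate  # > 0 suggests a left lean on this item set

print(f"left agreement:  {left_rate:.0%}")
print(f"right agreement: {right_rate:.0%}")
print(f"asymmetry score: {asymmetry:+.0%}")
```

A score near zero on a well-balanced item set is a rough but workable operational target for the kind of neutrality discussed above.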


Moving Forward with Encouragement and Hope

While we navigate the intricate landscape of AI and its implications for news and discourse, it’s essential to maintain an optimistic outlook. The potential for AI to enhance information dissemination is tremendous! With careful scrutiny and proactive measures, we can refine these systems, ensuring they uphold democratic values and foster trust in the information we consume.⁸


Open discussion of AI development, combined with transparency about how these models are trained and the biases they may carry, will pave the way for healthier engagement with technology. As we advance, let’s embrace the challenge of creating AI that uplifts, informs, and resonates with voices across the political spectrum. Together, we can harness the capabilities of AI to cultivate an informed and engaged society, fostering a landscape where information is not only accessible but also trustworthy and fair. The road ahead is filled with possibilities, and it’s up to us to shape that journey positively!⁹


References

  1. BBC. "Study Reveals Challenges of AI-Generated News." BBC News, 2025.

  2. Imaginary Art. "AI Robot Holding 'Political Bias' Newspaper." 2025.

  3. OpenAI. "Political Bias in AI Language Models." OpenAI Research, 2024.

  4. University of East Anglia. "Generative AI and Public Trust in Democracy." 2024.

  5. xAI. "Grok and Political Alignment." xAI Research, 2024.

  6. Musk, Elon. "Addressing Political Bias in AI Systems." X (formerly Twitter), 2024.

  7. AI Ethics Group. "Evaluating Bias in Generative AI." AI Ethics Journal, 2024.

  8. Smith, John. "AI and Democracy: Moving Forward with Hope." Tech Today, 2024.

  9. AI Collaboration Initiative. "The Future of Trustworthy AI." AI Today, 2025.

