
In brief
- Research in Nature and Science shows that AI chatbots have changed voters’ preferences by up to 15%.
- Researchers found uneven accuracy across political contexts and documented concerns about bias.
- A recent poll found that younger conservatives are the most willing to trust AI.
New research from Cornell University and the UK AI Security Institute has found that commonly used AI systems can shift voters' preferences in controlled election settings by as much as 15%.
Published in Science and Nature, the findings emerge as governments and researchers explore how AI could influence coming election cycles, and as developers try to remove bias from their consumer-facing models.
“There is significant public concern about the potential use of generative artificial intelligence for political persuasion and the resulting implications for elections and democracy,” the researchers wrote. “We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes.”
The Nature study tested nearly 6,000 participants in the US, Canada, and Poland. Participants rated a political candidate, conversed with a chatbot supporting that candidate, and then rated the candidate again.
In the American arm of the study, which involved 2,300 people ahead of the 2024 presidential election, the chatbot had a reinforcing effect when it matched a participant's stated preference. The biggest shifts occurred when the chatbot supported a candidate the participant had opposed. Researchers reported similar results in Canada and Poland.
The study also found that policy-oriented messages produced stronger persuasion effects than personality-oriented messages.
Accuracy varied from conversation to conversation, and chatbots supporting right-leaning candidates produced more inaccurate statements than chatbots supporting left-leaning candidates.
“These findings carry the uncomfortable implication that AI political persuasion can exploit imbalances in what the models know, spreading uneven inaccuracies even under explicit instructions to remain truthful,” the researchers said.
A separate study in Science investigated why the persuasion occurred. That work tested 19 language models on more than 700 political issues with 76,977 adults in the UK.
“There are widespread fears that conversational artificial intelligence could soon exert unprecedented influence on human beliefs,” the researchers wrote.
They found that prompting techniques had a greater effect on persuasion than model size. Prompting models to introduce new information increased persuasiveness but decreased accuracy.
“The prompt encouraging LLMs to provide new information was the most successful at convincing people,” the researchers wrote.
Both studies were published as analysts and policy think tanks evaluated how voters viewed the idea of AI in government roles.
A recent survey from the Heartland Institute and Rasmussen Reports found that younger conservatives showed more willingness than liberals to give AI authority over major government decisions. Respondents aged 18 to 39 were asked whether an AI system should help guide government policy, interpret constitutional rights or lead large armies. Conservatives expressed the highest support.
Donald Kendal, director of the Heartland Institute’s Glenn C. Haskins Emerging Issues Center, said voters often misjudge the neutrality of large language models.
“One of the things I’m trying to get across is to remove the illusion that artificial intelligence is unbiased. It is very clearly biased, and some of that is passive,” Kendal told Decrypt, adding that trust in these systems could be misplaced when corporate training decisions shape their behavior.
“These are big Silicon Valley companies building these models, and from the tech censorship controversies of recent years, we’ve seen that some companies aren’t shy about pushing the envelope when it comes to the content distributed across their platforms,” he said. “If that same concept appears in large language models, we get a biased model.”