AI Bots Can Manipulate Online Polls, New Research Shows
The rise of artificial intelligence could pose a disastrous threat to online polling.
That’s according to new research published in the Proceedings of the National Academy of Sciences that lays out just how easily online survey data can be manipulated by AI agents. The paper, by Sean Westwood, an associate professor of government at Dartmouth College and the director of the Polarization Research Lab, underscores the emerging threat that AI poses to a pillar of modern data collection and public opinion research.
“We can no longer trust that survey responses are coming from real people,” Westwood said in a press release. “With survey data tainted by bots, AI can poison the entire knowledge ecosystem.”
As part of the research, Westwood designed and tested “an autonomous synthetic respondent” – essentially an AI bot – “capable of producing survey data that possesses the coherence and plausibility of human responses.” That agent then completed surveys, evading detection by standard quality checks 99.8 percent of the time.
According to Westwood’s paper, his synthetic respondent evaded the quality checks by simulating “realistic reading times calibrated to a persona’s education level,” generating “human-like mouse movements” and even typing answers with “plausible typos and corrections.”
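Westwood has not published the agent’s code, but the behavioral mimicry he describes is not hard to approximate. The sketch below is a hypothetical illustration, not the paper’s implementation: the function names, reading speed, typo rate and neighbor-key map are all assumptions made for the example.

```python
import random

# Hypothetical sketch of the evasion tactics the paper describes.
# Not Westwood's code; every parameter here is an illustrative guess.

def reading_time(question: str, wpm: float = 230.0) -> float:
    """Plausible seconds spent 'reading' a question: pace the answer to
    the text length, with human-like jitter, instead of responding
    instantly the way a naive bot would."""
    words = len(question.split())
    base = words / (wpm / 60.0)
    return max(0.8, random.gauss(base, 0.25 * base))

def keystrokes(answer: str, typo_rate: float = 0.03):
    """Yield (key, delay_in_seconds) events for typing an answer,
    occasionally hitting an adjacent key and backspacing to fix it,
    so the event log shows plausible typos and corrections."""
    adjacent = {"a": "s", "e": "r", "i": "o", "n": "m", "t": "r"}  # toy map
    for ch in answer:
        if ch.lower() in adjacent and random.random() < typo_rate:
            yield (adjacent[ch.lower()], random.uniform(0.06, 0.2))  # typo
            yield ("<BACKSPACE>", random.uniform(0.15, 0.5))         # fix it
        yield (ch, random.uniform(0.06, 0.2))

def mouse_path(start, end, steps: int = 40):
    """Curved, jittered cursor trajectory (a quadratic Bezier with a
    random control point) rather than the straight line that
    behavior-based detectors flag as robotic."""
    (x0, y0), (x1, y1) = start, end
    cx = (x0 + x1) / 2 + random.uniform(-80, 80)
    cy = (y0 + y1) / 2 + random.uniform(-80, 80)
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        path.append((x + random.gauss(0, 1.5), y + random.gauss(0, 1.5)))
    return path
```

None of this requires sophistication, which underlines the paper’s argument: behavioral signals are cheap to fake.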
The problem, the paper argues: Existing fraud-detection measures in online survey research simply aren’t good enough anymore.
“The vulnerabilities demonstrated in this paper suggest that those using measures of user behavior or question-based countermeasures are fighting a losing battle,” Westwood writes.
To highlight just how susceptible political polling data is to AI interference, Westwood conducted a simple analysis of polls taken in the final week before Election Day 2024. What he found, according to the paper, is that between just 10 and 52 synthetic respondents could have swung the results of those polls to reverse which candidate was ahead.
Just 55 to 97 synthetic respondents would have been needed to swing the poll results outside of the margin of error, according to the paper.
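The arithmetic behind those figures is simple. As a hypothetical illustration (the poll below is invented, not one of the polls Westwood analyzed), consider how few fabricated responses it takes to flip the leader in a close race:

```python
# Back-of-the-envelope illustration with made-up numbers: how few
# fabricated responses flip which candidate leads a close poll.

def bots_to_flip(n: int, share_a: float, share_b: float) -> int:
    """Smallest number of added responses for trailing candidate B that
    pushes B's raw count past A's in a poll of n real respondents."""
    a, b = round(n * share_a), round(n * share_b)
    bots = 0
    while b + bots <= a:  # keep adding fake B votes until B leads
        bots += 1
    return bots

# A 1,000-person poll where A leads B by 48% to 47%:
print(bots_to_flip(1000, 0.48, 0.47))  # -> 11 fake respondents flip the lead
```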
Such skewed results might be written off as a fluke in a high-profile race like a presidential contest, where polling is plentiful. But such interference could weigh more heavily on issues with far less polling behind them, Westwood writes.
“While a single survey with extreme results might be discounted during a high-information election, polling on other issues is often sparse, allowing individual polls to disproportionately influence media narratives,” the paper reads.
The research illustrates the emerging risks of AI at a moment when some pollsters and public opinion researchers are actively experimenting with and pursuing methods driven by AI. Even proponents of those methods have acknowledged concerns about the potential for bad actors to use AI bots to skew data.
“We have this thing, this reflection of the social fabric, that is polling,” Amir Kanpurwala, the co-founder of the quantitative research platform Outward Intelligence, told Campaigns & Elections earlier this year. “And we I think have faith, as a country, in that data, maybe not for individual polls, but at an aggregate level. So what happens when folks try to manipulate that? What can you believe? How can you be entirely sure that the data that you have is not being corrupted in some way or fashion?”
