Chat, Who's Gonna Win the Election?

Foolish Pollsters Are Now Just Asking AI What Voters Would Say in Response to Questions and Publishing It at Face Value

"Pure fictions are on the brink of being treated as scientific and political knowledge."
Victor Tangermann
Companies are releasing public opinion polling data that was generated by an AI, a trend that has researchers concerned.
Getty / Futurism

Last month, Axios was forced to issue a bizarre correction for a blog post about a growing maternal health crisis in the United States.

The story quoted new poll findings by a company called Aaru, representing them as research based on the feedback of American adults. But according to an editor’s note, the piece had to be “updated to note that Aaru is an AI simulation research firm.”

In other words, Axios had failed to disclose that it was citing alleged “polling data” that wasn’t drawn from human respondents at all. Instead, it was dreamed up by a large language model, the latest sign of every imaginable industry trying to leverage AI, even when doing so makes absolutely no sense.

As Digital Theory Lab director Leif Weatherby and University of California, Berkeley, computer science professor Benjamin Recht explain in a guest essay for the New York Times, the practice that tricked Axios is called “silicon sampling,” and it’s a recipe for disaster.

“The idea behind silicon sampling is simple and tantalizing,” they write. “Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use AI agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.”
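To make the idea concrete, here is a minimal sketch of what a silicon-sampling pipeline looks like. Everything in it is invented for illustration: the personas, the question, and the `query_model` stub, which stands in for a real LLM API call that this sketch does not make.

```python
import random

# Hypothetical illustration of "silicon sampling": prompt a language
# model with a demographic persona plus a survey question, then tally
# the simulated answers as if they were poll results.

# Made-up personas; a real pipeline would draw thousands from census-like data.
PERSONAS = [
    {"age": 34, "state": "Ohio", "education": "college"},
    {"age": 61, "state": "Texas", "education": "high school"},
]

QUESTION = "Do you approve of the government's handling of maternal health? (yes/no)"

def build_prompt(persona: dict, question: str) -> str:
    """Format a persona-conditioned survey prompt."""
    return (
        f"You are a {persona['age']}-year-old respondent from "
        f"{persona['state']} with a {persona['education']} education. "
        f"Answer the survey question: {question}"
    )

def query_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would hit a model API here."""
    return random.choice(["yes", "no"])

def silicon_sample(personas, question):
    """Collect one simulated answer per persona."""
    return [query_model(build_prompt(p, question)) for p in personas]

answers = silicon_sample(PERSONAS, QUESTION)
approval = answers.count("yes") / len(answers)
```

The appeal is obvious from the sketch: the loop runs in seconds and costs pennies, where recruiting real respondents takes days. The problem, as the researchers below point out, is that the "respondents" are whatever the model's training data and safety filters make them.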

If that sounds like vast overreach that could undermine the value of opinion polling itself, you may be correct. The data only has value “insofar as it summarizes the beliefs and opinions of actual humans,” as Weatherby and Recht argue. “Using simulations of human opinions in place of the real thing will only worsen our broken information ecosystem, and sow distrust.”

Pollsters have long relied on statistical models to make up for a relatively small pool of responses while addressing possible variables that could skew the data. After all, convincing people to answer questions on the phone or online isn’t exactly easy.
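The standard version of that adjustment is post-stratification weighting: each respondent is up- or down-weighted so the sample's demographic mix matches known population proportions. A minimal sketch, with all numbers invented for illustration:

```python
# Minimal sketch of post-stratification weighting, the standard way
# pollsters correct a small or skewed sample. All figures are made up.

# Known population shares by age group (e.g. from census data).
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Observed sample: (age_group, answered_yes) per respondent.
# Note the 55+ group is overrepresented relative to the population.
sample = [
    ("18-34", True), ("18-34", False),
    ("35-54", True), ("35-54", True), ("35-54", False),
    ("55+", True), ("55+", False), ("55+", False), ("55+", False), ("55+", False),
]

n = len(sample)
# How many respondents fell into each group.
counts = {g: sum(1 for grp, _ in sample if grp == g) for g in population}
# Weight = population share / sample share for the respondent's group.
weights = {g: population[g] / (counts[g] / n) for g in population}

# Weighted "yes" estimate versus the raw, unweighted one.
weighted_yes = sum(weights[g] for g, yes in sample if yes) / n
raw_yes = sum(1 for _, yes in sample if yes) / n
```

The key point is that weighting reshapes answers that real people actually gave; it never invents a response. Silicon sampling replaces the responses themselves.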

But making up responses wholesale using AI is obviously a terrible alternative, one that can easily introduce biases and “influence public opinion itself, rather than merely to report what the public thinks,” as Weatherby and Recht warn.

Silicon sampling supercharges the problem by layering in the biases of the AI models themselves. In a 2025 paper, researchers from Northeastern University found that such AI-generated samples are “generally not reliable substitutes for human respondents, especially in policy settings.”

“The models struggle to capture nuanced opinions and often stereotype groups due to training data bias and internal safety filters,” the paper reads. “Therefore, the most prudent approach is a hybrid pipeline that uses AI to improve research design while maintaining human samples as the gold standard for data.”

A separate paper by University of Bern psychology postdoc Jamie Cummins, which has yet to be peer reviewed, found that generating “silicon samples” involves making “many analytic choices” that could have a significant “impact on sample quality.”

Even a “small number of decisions can dramatically change the correspondence between silicon samples and human data,” Cummins found.

Despite these widespread concerns, Aaru and other companies like it are raising hundreds of millions of dollars in funding and striking partnerships with Stanford University and public opinion polling heavyweight Gallup, according to Weatherby and Recht.

It’s an alarming new trend that highlights how AI tools continue to erode public trust by presenting often hallucinated fiction as fact, and it’s especially concerning given the potential of AI-generated polls to sway public opinion, further entrenching the values of models that have long been found to suffer from inherent biases.

“Pure fictions are on the brink of being treated as scientific and political knowledge,” Weatherby and Recht conclude in their essay. “If we do not pull back, our understanding of society might become artificial, too.”

More on AI slop: Wall Street Journal Editor-in-Chief Instructs Staff to Welcome AI Sloplords

I’m a senior editor at Futurism, where I edit and write about NASA and the private space sector, as well as topics ranging from SETI and artificial intelligence to tech and medical policy.