
The More Scientists Work With AI, the Less They Trust It

Numbers are down across the board.
A preliminary report shows that researchers' confidence in AI software dropped off a cliff over the last year.
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

Scientists are a skeptical bunch — it’s in the job description. But when it comes to AI, researchers are growing increasingly mistrustful of the tech’s capabilities.

In a preview of its 2025 report on the tech's impact on research, the academic publisher Wiley released preliminary findings on researchers' attitudes toward AI. One startling takeaway: scientists expressed less trust in AI than they did in 2024, when the tech was decidedly less advanced.

For example, in the 2024 iteration of the survey, 51 percent of scientists polled were worried about potential “hallucinations,” a widespread issue in which large language models (LLMs) present completely fabricated information as fact. That number climbed to a whopping 64 percent in 2025, even as AI use among researchers surged from 45 to 62 percent.

Anxiety over security and privacy was up 11 percent from last year, while concerns over ethical AI and transparency also ticked up.

In addition, hype has deflated massively compared to last year, when buzzy AI research startups dominated headline after headline. In 2024, scientists surveyed said they believed AI was already surpassing human abilities in over half of all use cases. In 2025, that figure plummeted to less than a third.

These findings track with previous research, which concluded that the more people learn about how AI works, the less they trust it. The opposite was also true: AI's biggest fanboys tended to be those who understood the least about the tech.

While more studies are needed to show how widespread this phenomenon is, it's not hard to guess why professionals would start to have doubts about their algorithmic assistants.

For one thing, those hallucinations are a serious issue. They’ve already caused major turmoil in courts of law, medical practice, and even travel. It’s not exactly a simple fix either; in May, testing showed that AI models were hallucinating more even as they technically became more powerful.

There's also the tricky issue of AI as a tool for profit. Experts say that users roundly prefer confident LLMs to ones that admit when they can't find data or deliver an accurate answer, even when that confident response is totally made up. If a company like OpenAI were to stamp out ChatGPT's sloppy hallucinations for good, it would scare off users in droves.

So if you’re not sure what to make of all the AI hype, ask a researcher — chances are, they’ll be happy to burst your bubble.

More on AI: AI Chatbots Are Becoming Even Worse At Summarizing Data


Joe Wilkins

Contributing Writer

I’m a tech and transit correspondent for Futurism, where my beat includes transportation, infrastructure, and the role of emerging technologies in governance, surveillance, and labor.