Is Google's so-called "AI co-scientist" poised to revolutionize scientific research as we know it? Not according to its human colleagues.
The Gemini 2.0-based tool, announced by Google last month, can purportedly come up with hypotheses and detailed research plans by using "advanced reasoning" to "mirror the reasoning process underpinning the scientific method." This process is powered by multiple Gemini "agents" that essentially debate and bounce ideas off each other, refining them over time.
The yet-unnamed tool would give scientists "superpowers," Alan Karthikesalingam, an AI researcher at Google, told New Scientist last month. And even biomedical researchers at Imperial College London, who got to use an early version of the AI model, eagerly claimed it would "supercharge science."
But the superlative-heavy hype seems to be just that: hype.
"This preliminary tool, while interesting, doesn't seem likely to be seriously used," Sarah Beery, a computer vision researcher at MIT, told TechCrunch. "I'm not sure that there is demand for this type of hypothesis-generation system from the scientific community."
In its announcement, Google boasted that the AI co-scientist came up with novel approaches for repurposing drugs to treat acute myeloid leukemia. According to pathologist Favia Dubyk, however, "no legitimate scientist" would take the results seriously — they're just too vague.
"The lack of information provided makes it really hard to understand if this can truly be helpful," Dubyk, who's affiliated with Northwest Medical Center-Tucson in Arizona, told TechCrunch.
Google's claims that the AI uncovered novel ways of treating liver fibrosis have also been shot down.
"The drugs identified are all well established to be antifibrotic," Steven O'Reilly at UK biotech company Alcyomics, told New Scientist last month. "There is nothing new here."
To be sure, the tool isn't without its potential advantages. It can parse through and pull from vast amounts of scientific literature in minutes, compiling what it finds into helpful summaries. That could be an amazing timesaver — if you can overlook the high likelihood of hallucinations, or made-up outputs, creeping into the work, a problem inherent to all large language models.
But that's not what Google is aiming for here; it's touting the AI model as a bona fide hypothesis-generating machine, something that can probe our understanding of a field with meaningful questions, not merely an automated research assistant. That's a very, very high bar. And more importantly, it's not something scientists are asking for.
"For many scientists, myself included, generating hypotheses is the most fun part of the job," Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan told TechCrunch. "Why would I want to outsource my fun to a computer, and then be left with only the hard work to do myself?"
"In general, many generative AI researchers seem to misunderstand why humans do what they do," Sinapayen added, "and we end up with proposals for products that automate the very part that we get joy from."
More on AI: LA Times Uses AI to Provide "Different Views" on the KKK