OpenAI is bragging that its forthcoming models are so advanced, they may be capable of helping build bioweapons.
In a recent blog post, the company said that even as it builds more and more advanced models with "positive use cases like biomedical research and biodefense," it feels a duty to walk the tightrope of "enabling scientific advancement while maintaining the barrier to harmful information."
That "harmful information" includes, apparently, the ability to "assist highly skilled actors in creating bioweapons."
"Physical access to labs and sensitive materials remains a barrier," the post reads — but "those barriers are not absolute."
In a statement to Axios, OpenAI safety head Johannes Heidecke clarified that although the company does not necessarily think its forthcoming AIs will be able to manufacture bioweapons on their own, they will be advanced enough to help amateurs do so.
"We're not yet in the world where there's like novel, completely unknown creation of biothreats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."
The OpenAI safety czar also admitted that while the company's models aren't quite there yet, he expects "some of the successors of our o3 (reasoning model) to hit that level."
"Our approach is focused on prevention," the blog post reads. "We don’t think it’s acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards."
As Axios notes, there's some concern that the very same models that assist in biomedical breakthroughs may also be exploited by bad actors. To "prevent harm from materializing," as Heidecke put it, these forthcoming models need to be programmed to "near perfection," both recognizing any dangers and alerting human monitors to them.
"This is not something where like 99 percent or even one in 100,000 performance is sufficient," he said.
Instead of heading off such dangerous capabilities at the pass, though, OpenAI seems to be doubling down on building these advanced models, albeit with ample safeguards.
It's a noble enough effort, but it's easy to see how it could go all wrong.
Placed in the hands of, say, an insurgent agency like the United States' Immigration and Customs Enforcement, such models could easily be used for harm. If OpenAI is serious about so-called "biodefense" contracting with the US government, it's not hard to envision a next-generation smallpox blanket scenario.
More on OpenAI: Conspiracy Theorists Are Creating Special AIs to Agree With Their Bizarre Delusions