"You can't prevent people from creating nonsense or dangerous information or whatever."

Open Sesame!

Meta is cracking the AI arms race wide open. Literally.

According to a report from The New York Times, Meta-formerly-Facebook is doubling down on its decision to make its large language model called LLaMA (Large Language Model Meta AI) — which competes with the likes of OpenAI's GPT-4 — open source.

"The platform that will win will be the open one," Yann LeCun, Meta's chief AI scientist, told the NYT, adding that he believes keeping powerful models behind closed doors is a "huge mistake."

It's a remarkably different strategy from those of Meta's competitors Google, Microsoft, and OpenAI, which have argued that, given the potential for large-scale misuse, society is safer when the metaphorical Krabby Patty Formula is kept behind closed doors.

But according to LeCun, that code-hoarding strategy is dangerous and a "really bad take on what is happening."

"Do you want every AI system to be under the control of a couple of powerful American companies?" LeCun told the NYT.

No Good Option

As the NYT points out, Meta began dipping its toes into the open-source waters back in February, when it made the underlying code for its advanced large language model available for download, via email, to anyone Meta deemed safe.

The code, however, was leaked to 4chan almost immediately. In an experiment, researchers at Stanford used the renegade LLM to build an AI system of their own and see how it behaved. They were shocked to find that it could generate some seriously problematic text.

Stanford researcher Moussa Doumbouya told colleagues that making this technology widely available would be like making "a grenade available to everyone in a grocery store," according to the NYT.

LeCun, for his part, seems to disagree with that conclusion.

"You can't prevent people from creating nonsense or dangerous information or whatever," LeCun told the NYT. "But you can stop it from being disseminated."

"Progress is faster when it is open," he added. "You have a more vibrant ecosystem where everyone can contribute."

But considering that at-scale content moderation is imperfect enough as it is even without generative AI, LeCun's argument isn't exactly bulletproof.

Whether Meta's open-source approach or the secrecy favored by OpenAI and Microsoft will turn out to be the winning strategy remains to be seen.

As the AI war heats up, companies are drawing their lines in the sand, and the stakes are higher than ever.

