"It's hilariously bad."

Fabulating

After just a few days online — and tons of Twitter criticism — Meta-formerly-Facebook has taken down an AI it created that writes vaguely-plausible-sounding-but-ultimately-nonsensical academic papers.

Meta pulled the plug on its Galactica AI, released on November 15, after three days of experts and random social media users dunking on the Large Language Model (LLM), which was trained on scientific papers, over its penchant for spitting out made-up nonsense.

Chief among those critics was artificial intelligence expert Gary Marcus, who called the system's output "bullshit" in his Substack and noted that it follows in the footsteps of OpenAI's GPT-3 text generator, which also excels at spitting out stuff that's grammatically sound but total hogwash.

"How do I put this politely?" the neuroscientist and AI aficionado wrote. "It prevaricates. A lot."

Space Bears

The project underscored the strange domain of contemporary AI: it can churn out lots of words that stay pretty much on theme, but that make little or no sense on closer inspection.

One of the funniest examples of Galactica's tendency toward BS was posted by Marcus' fellow AI expert David Chapman, who linked to a Y Combinator thread where someone used the model to write a Wikipedia article about "bears in space." The neural network spat out a completely false concoction about a Soviet space bear named "Bars" that, in its bizarro universe, was launched into orbit aboard Sputnik 2 à la Laika, the poor cosmonaut dog that died aboard that real mission.

"It's hilariously bad," Chapman wrote.

Jokes aside, however, the quick takedown of the project seemed tantamount to a public admission that the bot was released too early, Marcus wrote — an acknowledgement that flies in the face of the fiery defense mounted by Meta's chief AI scientist Yann LeCun, who kept angrily posting, both before and after the demo's removal, that critics were taking their Galactica "shitposting" too far.

"The reality is that large language models like GPT-3 [and] Galactica are like bulls in a china shop, powerful but reckless," Marcus tweeted. "And they are likely to vastly increase the challenge of misinformation."

More on problematic AI: People Can't Stop Feeding Their Selfies Into a Super Mean AI
