They cloned a little too much of ChatGPT's capabilities.

Pack It Up

Just days after unveiling a demo of a ChatGPT clone called Alpaca, Stanford researchers have taken their AI offline, citing concerns over costs and, more importantly, safety, The Register reports.

"The original goal of releasing a demo was to disseminate our research in an accessible way," a spokesperson for Stanford's Human-Centered Artificial Intelligence institute told The Register in a statement. "We feel that we have mostly achieved this goal, and given the hosting costs and the inadequacies of our content filters, we decided to bring down the demo."

Bad Trips

Clearly, some of the decision was motivated by practical concerns. Reportedly built on just a $600 budget by fine-tuning Meta's existing LLaMA 7B large language model (LLM), Alpaca was mainly meant to demonstrate how cheaply the capabilities of tech from OpenAI and Google can now be rivaled. Its upkeep, however, proved more expensive.

But Alpaca may also have been too good an imitation, aping one of its inspiration's less desirable traits: a propensity for spewing misinformation, a failure mode of LLMs that the AI industry likes to describe as "hallucination."

This came as no surprise to the researchers, who acknowledged the shortcoming when announcing Alpaca's release, stating that "hallucination in particular seems to be a common failure mode for Alpaca, even compared to [GPT-3.5]."

Among its obvious failures: getting the capital of Tanzania wrong, and churning out, without resistance, convincingly written misinformation on why the number 42 is the optimal seed for training AIs.

Budget King

The researchers didn't provide specifics on how Alpaca went further off the rails, or took liberties with the truth, once it was open to the public. But based on ChatGPT's own failures, it's not hard to imagine what that looked like, or how endemic the issue was.

Still, for those who wish to build on the Stanford researchers' findings, Alpaca's code is still available on GitHub. Flawed as it may have been, it was still a triumph of low-budget AI engineering, for better or worse.

"We encourage users to help us identify new kinds of failures by flagging them in the web demo," the researchers wrote in the release. "Overall, we hope that the release of Alpaca can facilitate further research into instruction-following models and their alignment with human values."

More on AI: Google's Hot New Bard AI Is Already Spouting Ridiculous Conspiracy Theories
