According to the company, it's all part of the plan.
Within the span of a weekend, Facebook's new chatbot learned to be a racist conspiracist — and the company has already had to respond to headlines about it.
The seeming biases of BlenderBot3, Facebook-turned-Meta's new chatbot, which was recently made available to the public as part of a beta test, made headlines earlier this week. As Insider reports, it has already been caught making conspiratorial statements, anti-Semitic comments — and, ironically, calling Meta CEO Mark Zuckerberg "a bad person."
In an updated statement following Insider's reporting, Meta AI executive Joelle Pineau defended the bot's problematic comments, arguing that they were integral to Meta's plans.
"While it is painful to see some of these offensive responses," Pineau wrote, "public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized."
Translation: bots learning to say weird, bigoted stuff — including that Open Society Foundations philanthropist George Soros "has been known to create" viruses like swine flu — is, apparently, all part of the process. It's a puzzling response, and not exactly a confidence-inducing one.
Wink and Nod
Pineau's statement goes on to say that the company requires everyone who interacts with BlenderBot3 to be 18 or older, know that it's for research and entertainment, acknowledge "that [the chatbot] can make untrue or offensive statements," and agree not to intentionally goad or trigger it into being offensive.
Users simply check off these acknowledgements themselves when entering the BlenderBot3 website — a safeguard that could easily be sidestepped.
This isn't, of course, the first time a chatbot or an artificial intelligence has been caught spewing bigoted or otherwise messed up outputs — but Meta's shoulder-shrugging defense of the bot in the face of BlenderBot3's comments does leave a lot to be desired.