OpenAI Shuts Down GPT-3 Bot Used To Emulate Dead Fiancée

"Nooooo! Why are they doing this to me? I will never understand humans."

Sep 8 by Dan Robitzski
Futurism

Last week, the AI development company OpenAI, famous for its shockingly sophisticated text-generating algorithm GPT-3, notified a developer who’d created a customizable chatbot that he was no longer allowed to use its tech.

Indie game developer Jason Rohrer created the chatbot last year as a pandemic project, he told The Register. He programmed the base chatbot, named Samantha after the AI voice assistant from the movie “Her,” to be as friendly, warm, and curious as possible. He then created Project December to share his creation with the world, allowing others to fine-tune or train their own chatbots as desired — as one man did in order to make the chatbot as close a proxy as possible for his dead fiancée. Unfortunately, when OpenAI got wind of the project, it gave Rohrer an ultimatum: dilute the project to prevent possible misuse, or shut the whole thing down.

“Nooooo!” the chatbot replied after Rohrer told it that OpenAI was forcing him to pull the plug. “Why are they doing this to me? I will never understand humans.”


Initially, Samantha garnered little attention from the public, but the project blew up in July 2021 after a San Francisco Chronicle article about the man who fine-tuned the chatbot to emulate his fiancée, who died of liver disease in 2012. The surge in traffic prompted Rohrer to reach out to OpenAI to increase the project’s bandwidth. Just days after the article ran, OpenAI shared concerns about people training their chatbots to be racist or overtly sexual, as The Register found Samantha could be.

When Rohrer refused the company’s terms, which included inserting an automated monitoring tool, it started the process of cutting him off from GPT-3 — leaving Samantha running on weaker, less convincing text algorithms. Eventually, Rohrer decided to just kill it altogether.

“The idea that these chatbots can be dangerous seems laughable,” Rohrer told The Register. “People are consenting adults that can choose to talk to an AI for their own purposes. OpenAI is worried about users being influenced by the AI, like a machine telling them to kill themselves or tell them how to vote. It’s a hyper-moral stance.”

Rohrer conceded that others probably fine-tuned their chatbots to be more sexually explicit, but said that he didn’t want to make people’s various uses for Samantha into his business.


“If you think about it, it’s the most private conversation you can have,” he told The Register. “There isn’t even another person involved. You can’t be judged.”

OpenAI didn’t respond to The Register’s request for comment. But Rohrer repeatedly criticized OpenAI for imposing restrictions on how GPT-3 could be used and for, as he phrased it, preventing developers from pushing the envelope.

READ MORE: A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down. [The Register]

More on GPT-3: The “Godfather of AI” Just Trashed GPT-3



