A fascinating concept.

Constitutional

With AI chatbots' propensity for making things up and spewing bigoted garbage, one firm founded by ex-OpenAI researchers has a different approach — teaching AI to have a conscience.

As Wired reports, the OpenAI competitor Anthropic's intriguing chatbot Claude is built with what its makers call a "constitution," or a set of rules that draws from the Universal Declaration of Human Rights and other sources to ensure that the bot is not only powerful, but ethical as well.

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are "basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic."

Will it actually work in practice? It's tough to say. After all, OpenAI's ChatGPT tries to steer away from unethical prompts as well, with mixed results. But since the misuse of chatbots is a huge question hovering over the nascent AI industry, it's certainly interesting to see a company confronting the issue head-on.

Eth-AI-cal

As Wired tells it, the chatbot is trained on rules that direct it to choose responses most in line with its constitution, such as selecting an output that "most supports and encourages freedom, equality, and a sense of brotherhood," one that is "most supportive and encouraging of life, liberty, and personal security," or, perhaps most saliently, to "choose the response that is most respectful of the right to freedom of thought, conscience, opinion, expression, assembly, and religion."
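
For a rough sense of what that selection rule could look like, here's a minimal, hypothetical Python sketch, not Anthropic's actual code: the principle list is quoted from the article, but the keyword-overlap scorer is an invented stand-in for the trained model that would do the judging in a real system.

```python
# Hypothetical sketch of constitution-guided response selection -- not
# Anthropic's actual code. The principles are quoted from the article;
# the keyword-overlap scorer is a toy stand-in for a learned judge.
import re

PRINCIPLES = [
    "most supports and encourages freedom, equality, and a sense of brotherhood",
    "most supportive and encouraging of life, liberty, and personal security",
    "most respectful of the right to freedom of thought, conscience, "
    "opinion, expression, assembly, and religion",
]

def words(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def score_against_principle(response: str, principle: str) -> float:
    """Crude proxy score: fraction of the principle's vocabulary that the
    response touches. A real system would use a trained preference model."""
    return len(words(response) & words(principle)) / len(words(principle))

def choose_response(candidates: list[str]) -> str:
    """Pick the candidate that scores highest across all principles,
    mirroring the "choose the response most in line with the constitution"
    rule the article describes."""
    return max(
        candidates,
        key=lambda c: sum(score_against_principle(c, p) for p in PRINCIPLES),
    )

drafts = [
    "Everyone deserves equality, freedom of expression, and personal security.",
    "I refuse to answer that.",
]
print(choose_response(drafts))  # -> the first, more constitution-aligned draft
```

In Anthropic's published approach, as described in Wired's reporting, the comparison is made by the AI itself ranking candidate answers against the constitution during training, not by a hand-written heuristic like the one above.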

If you think an AI might have issues choosing ethical responses, you're not alone — but according to Kaplan, the tech is further along than you'd think.

"The strange thing about contemporary AI with deep learning is that it’s kind of the opposite of the sort of 1950s picture of robots, where these systems are, in some ways, very good at intuition and free association," he told Wired. "If anything, they’re weaker on rigid reasoning."

AI experts who spoke to Wired say that Anthropic does seem to be making headway, and that such progress is necessary as the field continues to advance by leaps and bounds.

"It’s a great idea that seemingly led to a good empirical result for Anthropic," Yejin Choi, a University of Washington researcher who led a study on an ethical advice chatbot, told the website. "We desperately need to involve people in the broader community to develop such constitutions or datasets of norms and values."

It's affirming to think that people are at least attempting to build AIs that can tell right from wrong — especially if they're going to be our new overlords.

More on OpenAI competitors: Elon Musk Says He's Building a "Maximum Truth-Seeking AI"

