Because self-regulation always goes so well!

Bot's Club

The AI industry big boys just formed a brand new table at the Silicon Valley cafeteria.

OpenAI, Microsoft, and Google, along with the Google-owned DeepMind and buzzy startup Anthropic, have together formed the Frontier Model Forum, an industry-led body that, per a press release, aims to ensure the "safe and responsible development" of AI.

"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," Microsoft president Brad Smith said in the statement. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."

In other words, it's a stab at AI industry self-regulation. And while it's good to see major industry players join forces to establish some best practices for responsible AI development, self-regulation has some serious limitations. After all, with no way for the government to actually enforce any of the Frontier Model Forum's rules through actions like sanctions, fines, or criminal proceedings, the body, at least for now, is mostly symbolic. Extracurricular group activity energy.

Self-Regulation Station

It's also worth noting that some notable names were left out from the jump. The Mark Zuckerberg-helmed Meta-formerly-Facebook apparently isn't a member of the club, while Elon Musk and his newly-launched xAI, which was — sigh — reportedly developed to "understand reality," were both left on the sidelines. (That said, though Meta, which has some pretty advanced models on deck, might have some room to complain about the snub, Musk and his stonerbot probably don't.)

The Forum does say that others can sit with them in the future, as long as they're making what the group deems to be "frontier models" — defined by the group as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks" — and demonstrate a general and mostly unspecified commitment to safety and responsibility.

Again, we can't say it isn't good to see these kinds of discussions happening between major AI firms. But we also can't ignore the fact that these are all for-profit companies with a financial incentive to churn out AI products, and non-binding self-regulation is a far cry from real, industry-wide government rules and oversight. Is it a start? Sure! But let's not let the buck stop here.

More on AI regulation: Ex-Google CEO Says We Should Trust AI Industry to Self-Regulate
