It's a real threat.
Assuming Direct Control
Meta executive and "godfather of artificial intelligence" Yann LeCun has had it with people ringing the alarm bells about a hypothetical AI doomsday.
In a lengthy post on X-formerly-Twitter, the computer scientist argued that there's a far bigger threat hovering over the burgeoning industry: powerful companies seizing control of the future of AI and using it to prop up their wealth and influence.
It's a pertinent point: a shrinking number of AI companies are emerging as the early winners of the AI race, claiming an ever-growing slice of a highly lucrative market.
"[OpenAI CEO Sam] Altman, [Google DeepMind's Demis] Hassabis, and [Anthropic's Dario] Amodei are the ones doing massive corporate lobbying at the moment," LeCun wrote in his thread, referring to the leaders of some of the biggest emerging players in the space. "They are the ones who are attempting to perform a regulatory capture of the AI industry."
Instead of buying into "fear-mongering campaigns" about AI running amok, triggering the next pandemic or nuclear armageddon, LeCun argues that we should be wary of a "small number of companies" taking the reins and determining the regulatory framework.
AI isn't "some natural phenomenon beyond our control," he added. "It's making progress because of individual people that you and I know. We, and they, have agency in building the Right Things."
Stop Your Doomsaying
It's not the first time LeCun has called into question the kind of "Terminator"-style doomsday scenarios hinted at by the likes of Altman.
"If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither," he told the Financial Times last month.
"Intelligence has nothing to do with a desire to dominate," he added. "It's not even true for humans."
Instead of focusing on these threats, LeCun argued in his latest thread that we should develop "guardrails" and "objectives" that can make AI "safe and controllable."
In addition, LeCun argued that AI systems should be made open source.
"In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we *need* the platforms to be open source and freely available so that everyone can contribute to them," he wrote in his latest X thread.
"Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture," he added.