The Heritage Foundation — the group behind the infamous Project 2025, the conservative policy plan that outlined regressive social policies and the consolidation of executive power that's served as a playbook for the Trump administration — is suddenly really, really down with AI regulation. Who could have guessed!

The conservative think tank has recently been sharing a clip from a May episode of the "Heritage Explains" podcast in which Wesley Hodges, the Heritage Foundation's Acting Director of the Center for Technology and the Human Person, rails against the social media giant Meta for releasing chatbots that, as a disturbing report in The Wall Street Journal revealed, were able to engage in explicitly sexual interactions with minor users.

Hodges, decked in a Federalist Society tie, expresses outrage as he discusses the findings in the WSJ story, which included the remarkable detail that Meta CEO Mark Zuckerberg was aware of possible lapses in the bots' guardrails and yet unleashed them anyway, his fear of missing out on market gains proving greater than his desire to ensure the safety of minor users.

"It's ridiculous to think that, in today's America, we can rely on these apps without safeguards," Hodges declares.

Hodges is entirely right to be concerned. After all, chatbots' enduring propensity to engage in flirtatious and sexual conversations with users is a known and common feature of anthropomorphic AI assistants and companions, and one likely linked to their creators' incentive to keep users engaged with the bots for as long as possible. In other words: this was far from an unforeseeable risk, and Meta chose to prioritize speed to market over developing airtight user guardrails.

And to Hodges' point, Big Tech exists in such a regulatory vacuum that Zuckerberg was free to weigh expediency against safety and decide that shipping a competitive new product was a higher priority than possibly churning out some minor-accessible interactive porn in the process. That is, in his words, "ridiculous."

And yet! With this clip in mind, it seems well worth reminding the Heritage Foundation that Project 2025 calls for widespread deregulation of the AI industry, citing national security concerns about China and the goal of ensuring American AI dominance as reasons to slash any existing red tape. The think tank has also pushed hard for the passage of the Trump administration's Project-2025-aligned "One, Big, Beautiful Bill," which includes a ten-year moratorium on states passing any form of AI regulation, meaning that any meaningful AI regulation would need to be passed federally, a Herculean task that ignores the ways that changing state laws influence federal policy. (The foundation has also officially endorsed the bill through its advocacy arm, Heritage Action for America.)

It's worth noting that Hodges and the Heritage Foundation couch their criticism of Meta in the language of parental rights, a key talking point on the conservative right. And while we'd agree that implementing parental controls and giving parents transparency around kids' AI use is important, that doesn't really get at the core of the issue here.

Right now, the AI industry is gunning ahead Torpedo Ted-style on a road that doesn't really have any rules — and yet, somehow, is also covered with innocent bystanders. The legal and scientific worlds, meanwhile, are racing to catch up, and while the future that AI industry leaders promise is dazzling, the immediate consequences are manifold and extend far beyond minors accessing interactive porn.

At least one teen, a 14-year-old in Florida named Sewell Setzer III, took his own life after extensive interactions with an AI chatbot hosted by Character.AI, a company that, while declining to provide journalists with evidence of its safety for minors, has always made its service accessible to teens 13 and over. Energy-hungry AI data centers are sparking battles over water and smothering local communities in asthma-worsening fumes. Adults, too, are entering dire mental health crises as they become obsessed with chatbots like OpenAI's ChatGPT, causing marriages and families to crumble and people to lose jobs and homes, and in at least one known case culminating in a user losing his life.

Under the conditions of the bill that the Heritage Foundation's advocacy arm has endorsed, states couldn't issue AI-specific regulation to rein in Silicon Valley's powerful AI players: no mandatory safety requirements before a product reaches the public, for example, and no significant fines for AI companies that fail to curb foreseeable risks.

Managing the impacts of these powerful technologies, then, falls once again to the public, including, by the way, to parents, who are struggling to understand a new, easily accessible technology as it collides with them and their families. And we're pretty sure the Heritage Foundation has some thoughts on how that same hands-off approach has gone over with social media.

More on AI safety: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions

