He's super disappointed. Poor guy.
On Thursday, the Federal Trade Commission announced it had opened an extensive investigation into ChatGPT maker OpenAI over concerns that it's violated consumer protection laws by harming users' reputations.
The regulator is now demanding that the Sam Altman-led company provide detailed accounts of every time somebody complained of its products making "false, misleading, disparaging or harmful" statements about users.
The FTC also requested information regarding a recent data leak that may have exposed users' sensitive payment-related information.
The investigation isn't sitting well with Altman, who took to Twitter to lament the decision.
"It is very disappointing to see the FTC's request start with a leak and does not help build trust," he wrote.
But Altman was careful not to ruffle too many feathers.
"That said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law," he added. "Of course we will work with the FTC."
Altman has long argued that AI should be regulated, albeit in ways that won't hurt his company's bottom line.
Earlier this year, the CEO testified in front of a Senate committee, practically begging lawmakers to come up with meaningful AI regulation.
His concerns tend to be long-term. Despite leading one of the biggest players in the AI industry, Altman has repeatedly warned about the threat of advanced AI going rogue.
"I think if this technology goes wrong, it can go quite wrong," he told lawmakers at the hearing. "And we want to be vocal about that. We want to work with the government to prevent that from happening."
But given his latest comments about the FTC's investigation, there's clearly plenty of nuance to how Altman envisions future AI regulation.
Besides, you know what else doesn't help to build trust with regulators? Leaking sensitive personal data.
More on the investigation: FTC Investigating ChatGPT for Saying Harmful Things About People