This is what it took?
Drawing a Line
OpenAI has told FiscalNote, a Washington, DC-based lobbying firm, that it can't advertise using ChatGPT for political purposes, Semafor reports.
The Sam Altman-led company jumped into action after FiscalNote proclaimed in a recent press release that it's "bringing the power of next-generation AI and workflow productivity to the multi-billion dollar lobbying and advocacy industry" by integrating ChatGPT into its platform.
The integration was supposedly meant to enhance "political participation," according to an earlier version of the press release, which clearly had OpenAI spooked. Shortly afterward, per Semafor, those last two words were replaced with "grassroots advocacy campaigns" in an editorial note, effectively putting distance between ChatGPT and any potential political use.
The incident sets a new precedent for the AI juggernaut — and possibly for the rest of the industry as well. According to Semafor, it's the first known instance of OpenAI policing how its technology is being advertised by third parties.
The use of AI in politics is a looming topic these days. We've already seen the Republican National Committee (RNC) use generative AI in a political ad — a worrying sign, with a presidential election right around the corner.
According to OpenAI's usage policies, last updated in March, using its products for political campaigning or lobbying is banned.
OpenAI also told Semafor that it's working on a machine learning classifier that can flag when ChatGPT is being used to generate large bodies of electoral campaign or lobbying-related materials.
As tools like ChatGPT become ubiquitous in modern society, companies like OpenAI are bound to face increased scrutiny over how their tech is being used.
And if the rocky history of social media moderation is anything to go by, policing how these products are being deployed, let alone advertised, is only going to become more difficult.