OpenAI shuts down election influence operation that used ChatGPT
OpenAI has banned a group of ChatGPT accounts linked to an Iranian influence operation that was producing content about the US presidential election, according to a blog post on Friday. The company says the operation created AI-generated articles and social media posts, though it does not appear to have reached a large audience.
This is not the first time OpenAI has banned accounts linked to state-affiliated actors using ChatGPT maliciously. In May, the company disrupted five campaigns using ChatGPT to influence public opinion.
These incidents are reminiscent of state-backed actors trying to influence past election cycles using social media platforms like Facebook and Twitter. Now similar groups (or perhaps the same people) are using generative AI to flood social channels with misinformation. Like social media companies, OpenAI seems to be taking a whack-a-mole approach, banning accounts associated with these attempts as soon as they come to light.
OpenAI says its investigation into this group of accounts benefited from a Microsoft threat intelligence report published last week, which identified the group, dubbed Storm-2035, as part of a broader campaign to influence US elections that has been active since 2020.
Microsoft said Storm-2035 is an Iranian network whose many sites mimic news outlets and “actively engage American voter groups at opposite ends of the political spectrum with polarizing messaging on issues such as U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.” The playbook, as seen in other operations, is not necessarily to promote one policy or another but to sow dissent and conflict.
OpenAI identified five website fronts for Storm-2035 that posed as progressive and conservative news outlets with credible-sounding domain names like “evenpolitics.com.” The group produced several long-form articles using ChatGPT, including one alleging that “X censors Trump’s tweets,” which Elon Musk’s platform almost certainly did not do (if anything, Musk is encouraging former President Donald Trump to engage more on X).
On social media, OpenAI identified a dozen X accounts and one Instagram account controlled by this operation. The company says ChatGPT was used to rewrite various political comments, which were then posted on these platforms. One of these tweets falsely claimed that Kamala Harris blames climate change for “increased immigration costs,” followed by “#DumpKamala.”
OpenAI says it found no evidence that Storm-2035’s articles were widely shared and found that most of its social media posts received few or no likes, shares, or comments. This is often the case with these operations, which can be launched quickly and inexpensively using AI tools like ChatGPT. Expect to see many more notices like this as the election approaches and partisan controversy grows online.