OpenAI Sets Rules to Fight Election Misinformation
OpenAI introduced a new set of rules to prevent the mass production of election misinformation. The ChatGPT creator said that politicians and the people running their campaigns are not allowed to use its AI technology for the 2024 elections.
The AI startup said it is actively working to restrict the use of its AI tools for creating “misleading ‘deepfakes,’ scaled influence operations, or chatbots impersonating candidates.” For example, OpenAI has trained its text-to-image model DALL·E to decline requests to generate images of real people, including candidates in the 2024 elections.
OpenAI also won’t allow users to create chatbots that impersonate real people, such as political candidates, or institutions, such as government agencies. Nor will it allow users to generate content that discourages people from voting or misleads them into believing they aren’t eligible to vote.
The AI startup has partnered with the National Association of Secretaries of State (NASS), America’s oldest nonpartisan professional organization for public officials, to direct ChatGPT users who ask procedural election questions to CanIVote.org.
OpenAI is also increasing transparency around AI-generated information, promising to give users attribution and links when ChatGPT provides real-time news and other sources. In addition, it plans to implement the Coalition for Content Provenance and Authenticity’s cryptography-based digital credentials for images generated by DALL·E. The company also announced that it is working on a provenance classifier to detect DALL·E-generated images.
OpenAI’s election measures follow the lead of several tech companies that have updated their election policies to mitigate the risks of rapidly evolving AI technologies.
In late 2023, Google announced restrictions on election-related answers generated by its AI tools. It also said it will require political campaigns advertising on Google to disclose their use of AI, mirroring Meta’s updated rules. YouTube made a similar announcement, requiring content creators to disclose whether they used AI in their videos.
Despite these efforts, tech companies are struggling to find the right strategies to protect election integrity and prevent AI-fueled misinformation. An August report by the Washington Post, for example, showed that OpenAI had failed to enforce its existing policies on political campaigning.
With no federal regulation in place, companies like OpenAI face few consequences when their safeguards fall short. The Federal Election Commission is evaluating whether its rule against “fraudulently misrepresenting other candidates or political parties” applies to AI-generated content, but no uniform standard governs how AI can be used in politics.