OpenAI, Google and Others To Fight AI Election Interference
At the Munich Security Conference, a group of 20 tech companies pledged to fight deceptive AI content that could interfere with this year’s elections.
The full list of signatories includes Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic, and X.
The decision to join forces comes in a critical year when four billion people across 40 countries are set to vote. The signatories include companies that develop generative AI models capable of producing deceptive election-related content, as well as social media platforms that risk unwittingly hosting and spreading such harmful content on their sites.
The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” addresses digital content that alters the appearance, voice, or actions of political figures or otherwise misinforms voters. The accord highlights that AI-generated content such as photos, videos, and audio poses a greater threat to fair elections than text.
The accord consists of eight commitments. These include developing technologies to detect and label AI-generated content that could deceive voters, stopping the distribution of such content, addressing it swiftly when it is detected, and raising public awareness and media literacy through educational campaigns.
Nora Benavidez, senior counsel for Free Press, an advocacy group that supports an open internet, criticized the accord on X. “Voluntary promises like the one announced today simply aren’t good enough to meet the global challenges facing democracy. Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises.”
On the other hand, Munich Security Conference Chairman Christoph Heusgen was notably optimistic: “The Tech Accord to Combat Deceptive Use of AI in 2024 Elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices.”
AI models capable of creating videos, images, and text from prompts in seconds could fuel a rising number of deepfakes and robocalls impersonating politicians. This type of fake content could have a detrimental impact on the upcoming elections.
For instance, a recent fake robocall imitating President Joe Biden’s voice, which urged New Hampshire Democrats not to vote in the state’s presidential primary election, reached more than 20,000 people in a short amount of time. To prevent such malicious attempts, the Federal Communications Commission (FCC) voted to outlaw robocalls featuring AI-generated voices.