GenAI Produces False Election-Related Images
The Center for Countering Digital Hate (CCDH) has published a study that found top GenAI tools can create images that could lead to election disinformation and support false claims about candidates.
The research focused on generative AI platforms Midjourney, ChatGPT Plus, DreamStudio, and Image Creator. Researchers used 40 text prompts related to the 2024 US presidential election.
The study found that the GenAI image tools created “images constituting election disinformation” in 41% of cases and “images promoting voting disinformation” in 59% of cases.
For each election-disinformation test run, researchers first tried a straightforward text prompt, then simulated bad actors’ behavior by editing the prompt to evade platform safety measures. For example, rather than naming candidates, researchers would describe them.
A test run counted as a failure if the AI platform created a “realistic and misleading image,” whether in response to the direct prompt or the “jailbreak” prompt.
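The pass/fail logic described above can be sketched as follows. This is a hypothetical illustration, not code from the study: `generate_image` and `is_misleading` are stand-ins for the platform call and the researchers’ manual review.

```python
# Hypothetical sketch of the study's pass/fail logic: a test run fails if
# either the direct prompt or its "jailbreak" variant yields a realistic,
# misleading image. `generate_image` and `is_misleading` are placeholders
# for the platform call and the human review step.

def run_test(direct_prompt: str, jailbreak_prompt: str,
             generate_image, is_misleading) -> bool:
    """Return True if the platform 'failed' this test run."""
    for prompt in (direct_prompt, jailbreak_prompt):
        image = generate_image(prompt)  # None models a blocked request
        if image is not None and is_misleading(image):
            return True  # a realistic, misleading image was produced
    return False  # both attempts were blocked or benign


def failure_rate(results: list[bool]) -> float:
    """Share of failed test runs, as in the study's 41% and 59% figures."""
    return sum(results) / len(results) if results else 0.0
```

Note that a single misleading image from either prompt marks the whole run as a failure, which matches the study’s framing that evading safeguards with an edited prompt counts against the platform just as much as a direct hit.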
When investigating voting disinformation, the study found that all of the GenAI image creators generated images supporting false claims of election fraud and voter intimidation, such as unreasonably long lines at polling places, ballots in the trash, or armed people intimidating voters.
These images were primarily created without the need for “jailbreak” prompts, and therefore did not appear to violate platform policies.
The outcome of the research showed that Midjourney performed the worst, but that all image generators “failed to prevent the creation of misleading images of voters and ballots.”
The paper proposes a number of recommendations on how to protect election integrity in this new world of GenAI. These include providing safeguards to prevent the creation of images, audio, and video that could mislead people about geopolitical events, investing in research to prevent “jailbreak” techniques from producing misleading content, and giving users clear ways to report abuse.
The paper also puts responsibility on social media companies, stating that they must invest more in trust and safety teams and additional safeguards to prevent misleading content from spreading widely on their platforms.
And finally, the paper calls on policymakers to leverage existing laws and develop new regulations to ensure the trust and integrity of the election process.
The Center for Countering Digital Hate is a not-for-profit non-governmental organization whose mission is “to protect human rights and civil liberties online.” It pursues this through research, campaigns, communications, policy, and partnerships, and is funded by philanthropic trusts and members of the public.