The Impact of Artificial Intelligence (AI) in Elections

Deceptive artificial intelligence (AI)-generated content poses a threat to elections, voters, and our democracy. Much of the public dialogue focuses on AI’s ability to generate and distribute false information, and government officials are responding by proposing rules and regulations aimed at limiting the technology’s potentially negative effects.  

The Challenge 

The 2024 presidential election will be the first in the United States in which AI-generated content is widely available. We are already seeing a proliferation of deepfakes that can deceive voters and spread disinformation. Deepfakes could mislead voters about where, when, and how they can vote, or inaccurately portray a candidate making controversial statements or participating in scandalous activity. 

The threat is not theoretical. In 2024, deepfake robocalls during the New Hampshire primary sought to discourage people from voting. The Federal Communications Commission (FCC) subsequently issued a unanimous ruling in February that made it illegal for robocalls to use AI-generated voices. The decision applies to all forms of robocall scams, including those related to elections and campaigns. Separately, the FCC is considering a proposal that would require disclosure of AI-generated material in campaign ads on broadcast television and radio. In 2023, a fake news outlet on X posted a deepfake audio recording of a Chicago mayoral candidate, making it appear that the candidate condoned police violence. These examples represent only the tip of the iceberg. 

Public Opinion 

Americans are concerned about the impact that AI could have on elections, with around one-half of the population expecting negative consequences for election procedures and a further deterioration of campaign civility. A UChicago Harris/AP-NORC poll released in November found that a bipartisan majority of U.S. adults are worried about the use of AI "increasing the spread of false information" in the 2024 election. A Morning Consult-Axios survey found an uptick in recent months in the share of U.S. adults who think AI will negatively impact trust in candidate advertisements, as well as trust in the outcome of the elections overall. Nearly 6 in 10 respondents said they think misinformation spread by AI will influence who ultimately wins the 2024 presidential race.  

The Role of Business  

In February 2024 at the Munich Security Conference, 20 leading technology companies, including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, signed the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections" and pledged to work together to detect and counter harmful AI content. The private sector can support voter education and raise public awareness of the risks involved with AI by continuing to draw attention to the issue so that it does not slip out of the public consciousness. But the business community cannot do this alone. The federal government needs to provide oversight and regulation of this new technology to ensure our elections are secure and the rights of voters are protected. 

Federal Legislation   

Policymakers at the federal level are primarily focused on minimizing the impact of AI-driven election disinformation. The proposals they draft often seek to prohibit the use of AI for deceptive purposes in elections or require disclosure of the use of AI in campaign speech. 

On June 18, 2024, PRINTING United Alliance participated in a Business for America (BFA) webinar titled "Deceptive Artificial Intelligence (AI) in the 2024 Election: A Conversation with Senator Amy Klobuchar." Senator Klobuchar discussed a package of bipartisan bills seeking to protect elections from deceptive AI and to establish guardrails for the use of AI in elections.  

The Alliance joined a BFA coalition letter in support of the bipartisan Protect Elections from Deceptive AI Act (S. 2770). Co-sponsored by Senators Klobuchar (D-MN) and Josh Hawley (R-MO), the bill would ban purposely misleading AI-generated audio and visual media of federal candidates in political ads, with exceptions for parody and satire. A parallel House bill (H.R. 8384) has been introduced by Reps. Derek Kilmer (D-WA) and Tony Gonzales (R-TX). 

The main provisions of the bipartisan legislation are as follows: 

  • Amends the Federal Election Campaign Act of 1971 (FECA) to ban individuals, political committees, or any other entity from knowingly distributing materially deceptive AI-generated audio or visual media of a federal candidate with the intent to influence an election or solicit funds. 
  • Consistent with the First Amendment, the bill has exceptions for parody, satire, and the use of AI-generated content in news broadcasts. 
  • Allows federal candidates to seek injunctive or other equitable relief to prohibit the distribution of deceptive AI content, or to seek civil damages against the distributor. 
  • Allows federal candidates targeted by this materially deceptive content to have content taken down and enables them to seek damages in federal court.

Two additional bills in the package that Sen. Klobuchar discussed in the webinar were the AI Transparency in Elections Act (S. 3875), co-sponsored by Senators Klobuchar and Lisa Murkowski (R-AK), and the Preparing Election Administrators for AI Act (S. 3987), co-sponsored by Senators Klobuchar and Susan Collins (R-ME). In the House, a companion bill, H.R. 8353, has been introduced in the House Administration Committee and is co-sponsored by Reps. Chrissy Houlahan (D-PA) and Brian Fitzpatrick (R-PA). 

Critics of S. 2770 and S. 3875 claim the bills would give the federal government undue, unconstitutional power over state administration of elections, often referred to as "federalizing" elections. Aware of these concerns, Sen. Klobuchar has repeatedly explained that her proposals apply only to federal elections.  

Both S. 2770 and S. 3875 passed the Senate Rules Committee in May 2024. Sen. Klobuchar intends to move all three bills to a floor vote by September 2024. 

A coalition of House Democrats has proposed legislation that aims to minimize the impact of AI technologies on all U.S. elections by establishing penalties and disclosure requirements for the use of these emerging capabilities in election messaging. Congresswoman Shontel Brown (D-OH) has introduced the Securing Elections from AI Deception Act, which would prohibit the use of AI to deprive or defraud individuals of their right to vote and would require disclaimers on AI-generated content. The legislation would be enforced by the Federal Trade Commission (FTC) and would apply to federal, state, and local elections. 

These legislative moves on both sides of the aisle indicate clear momentum for continued government action on AI. 

Moving Forward 

In the 2024 election cycle, AI has already been deployed to produce fake audio and images, and lawmakers have largely been left scrambling to regulate the industry as it charges ahead with new developments. There is no denying that AI can be used for good, but its nefarious uses in politics are a threat to elections, voters, and our democracy. Limiting deceptive AI election content through federal legislation is a worthy nonpartisan goal. 

Stephanie Buka, Government Affairs Coordinator

Stephanie Buka is the Government Affairs Coordinator for PRINTING United Alliance. In this role, she supports Ford Bowers, CEO, and the Government Affairs team, and coordinates efforts with the lobbying firm ACG Advocacy. She manages all aspects of grassroots advocacy campaigns, including facilitating timely call-to-action alerts and updates to The Advocacy Center on key federal and state legislative issues. As a member of the Office of Corporate Communications, Buka manages content and audience-building responsibilities for the Government Affairs team. She is also responsible for administering the Alliance's political action committee, PrintPAC.

Prior to joining the Alliance, Buka served as a senior legislative researcher, and later as a constituent services coordinator, for the 15-member legislative body representing 1.3 million residents of Allegheny County, Commonwealth of Pennsylvania. In addition to drafting legislation and addressing constituent concerns, Buka cultivated strong relationships with appointed and elected officials at the local, state, and federal levels of government.

Buka holds a master’s degree in Public Policy and Management from the University of Pittsburgh, Graduate School of Public and International Affairs (GSPIA). She also earned a master's degree in Criminology from Indiana University of Pennsylvania, along with a Certificate in Forensic Science and Law from Duquesne University.
