Generative artificial intelligence has the potential to dramatically disrupt the outcome of the 2024 US election, said experts at a briefing Nov. 27, hosted by the Brennan Center for Justice.
Once confined strictly to science fiction, AI is now everywhere. Generative AI’s capabilities to manipulate data, impersonate experts, candidates, and political leaders, and spew out misinformation on social media arrive at a time when voters already struggle to separate fact from falsehood.
In a surprise moment during the briefing, moderator Zoe Schiffer, managing editor at Platformer News, featured a clip that used generative AI to create a cloned version of speaker Lawrence Norden, Senior Director of the Elections and Government Program at the Brennan Center for Justice at New York University School of Law. The AI-generated Norden was able to accurately replicate Norden’s concerns about AI in the upcoming elections, but added additional disinformation and hyperbole about China, Iran and Russia’s alleged interference.
Deep Fakes
The real Norden broke his concerns down into four categories. The first is the imitation threat, also known as deep fakes, in which ChatGPT is used to generate articles that appear to come from election offices or candidates.
AI can also be used for harassment of election officials, said Norden, with AI-generated emails flooding election offices with frivolous records requests. “You could just imagine offices being inundated with thousands and thousands of requests that keep election officials from doing their work,” he said.
A third threat is cyberattacks, potentially against election offices, said Norden, adding that his fourth concern was public fear of AI. “There’s been so much written about it and there’s been so much undermining of confidence in our elections already that AI itself, and the claims for what it can do, may add to this undermining of confidence in elections,” he said.
Multifactor authentication — requiring people to enter a one-time password sent to their phones — can curb some of the issues with AI-generated material, said Norden. Voting machines and electronic poll books must have paper backups, he said.
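The one-time codes Norden describes typically follow the HOTP construction (RFC 4226), in which a server and a user’s device derive a short numeric code from a shared secret. The sketch below is illustrative only; the function names and parameters are this article’s, not any election vendor’s.

```python
# Minimal sketch of one-time-code generation and verification
# per the HOTP construction (RFC 4226). Illustrative, not a
# production implementation.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short numeric one-time code from a shared secret."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, counter: int, submitted: str) -> bool:
    """Compare codes in constant time to avoid timing leaks."""
    return hmac.compare_digest(hotp(secret, counter), submitted)
```

A time-based variant (TOTP, RFC 6238) replaces the counter with the current 30-second interval, which is what phone authenticator apps use.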
Biden Executive Order
“We need to make sure that election officials, the media are giving the public accurate information about elections,” said Norden.
In October, the Biden Administration announced an executive order attempting to place safeguards and oversight on the use of AI. Mia Hoffman, a research fellow at the Center for Security and Emerging Technology at Georgetown University, said the action was a good start.
“At a high level, it does a lot of things right. I think they’re trying to address a lot of different concerns with one directive and that’s hard to do,” she said.
In the context of elections, the executive order addresses disinformation through watermarking: hidden patterns are embedded in AI-generated content so that it can be detected and identified. “We can actually tell what information and what media is real and what’s fake,” said Hoffman. She added that she was also excited about investments in research into authentication technology.
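One way such hidden patterns can work, for text, is statistical watermarking: a generator is nudged toward a secret, key-derived “green list” of tokens, and a detector later counts how often tokens land on that list. The toy below sketches the idea under assumed names (SECRET, is_green) — it is a simplification for illustration, not the scheme of any particular AI vendor or the executive order itself.

```python
# Toy sketch of statistical text watermarking: a keyed hash splits the
# vocabulary into "green" and "red" halves depending on the previous
# token; watermarked text over-uses green tokens, and a detector can
# measure that bias. Simplified for illustration only.
import hashlib

SECRET = "demo-key"  # hypothetical shared watermarking key

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign roughly half the vocabulary to the
    green list, keyed on the secret and the preceding token."""
    h = hashlib.sha256(f"{SECRET}:{prev_token}:{token}".encode()).digest()
    return h[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens on the green list: near 0.5 for ordinary
    text, noticeably higher for watermarked text."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

A detector with the key flags text whose green fraction is statistically far above one half; a reader without the key sees nothing unusual.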
Risk Assessment
“Being able to tell that some information has not been manipulated might actually become more valuable than being able to tell if something has been AI-generated or not because it just requires kind of trustworthy issuers of news to be able to comply with this authentication rather than making everybody who generates something with AI use watermarking,” she said.
Election hardware and software will be subject to an annual risk assessment when AI is used, said Hoffman, noting that the National Institute of Standards and Technology framework — the gold standard for risk assessment — will be used.
Mekela Panditharatne, counsel in the Democracy Program at the Brennan Center for Justice, said more needed to be done by Congress to safeguard not only elections, but general use of AI.
Voter Suppression
“Absent congressional action, there’s sort of a modest amount that can be done and made enforceable by the federal government, and we kind of saw that with this order. So given those constraints, I do think it’s an admirable effort,” she said.
Panditharatne said the order invokes the Defense Production Act, a national security law. “But when you look at what’s enforceable, it’s a very small portion of items. Elements like voter suppression, the use of AI in election administration, and election security aren’t expressly recognized. So as the order is implemented, there are certainly important steps that need to be taken to ensure that those elements are sufficiently protected, but much more is needed by Congress as well,” she said.
False Narratives
Post-election, Panditharatne said she expects to see AI employed to generate distrust of results, as with the 2020 election. “We might see sort of amplification of false narratives about the election process, potentially deep fakes of election officials manipulating the vote count or preventing people from voting. That’s something that we should be worried about potentially seeing,” she said.
All of the experts encouraged voters to deeply examine the sources from which they are receiving election-related content.