
    How is AI Threatening Ethnic Voters?

    As AI grows more complex and prevalent, ethnic voters face an election landscape where the difference between real and artificial news is ever-harder to gauge.


    At a Friday, July 12 Ethnic Media Services briefing, digital media transparency and political watchdog experts monitoring the rise of AI-generated disinformation discussed the challenges facing ethnic voters in this year’s national and local elections, and suggested policies and initiatives to combat the problem.

    AI disinformation

    As the November U.S. election nears, online disinformation “is a very real problem, turbocharged by AI, that is emerging in our democracy, literally by the day,” said Jonathan Mehta Stein, executive director of California Common Cause, a nonprofit watchdog agency. 

    “These threats are not theoretical,” he continued. “We’ve seen elections impacted by AI deepfakes and disinformation in Bangladesh, Slovakia, Argentina, Pakistan and India. Here, before the primary, there was a fake Joe Biden robocall in New Hampshire telling Democratic voters not to vote.”

    Jonathan Mehta Stein, Executive Director of California Common Cause, a nonprofit watchdog agency, explains why it’s difficult for social media platforms to remove content that has been manipulated by AI, despite content moderators and frequent suspension of accounts.

    Last week, the U.S. Justice Department also disrupted a Russian disinformation campaign involving nearly 1,000 AI-generated social media bot profiles promoting Russian government aims on X while posing as Americans throughout the country.

    Additionally, entire AI-generated local news websites are emerging for the purpose of Russian-led disinformation, among them D.C. Weekly, the New York News Daily, the Chicago Chronicle and the Miami Chronicle.

    “India is a good example of what could happen in the U.S. if we don’t educate ourselves,” Stein said. “Indian voters are bombarded with millions of deepfakes, and candidates have begun to embrace them. It’s created this arms race where some candidates are using deepfake images of themselves and their opponents, and the candidates who don’t want to use them feel they have to in order to keep up.”

    As the problem worsens, many social media platforms are ignoring it.

    Meta has made some of its fact-checking features optional, while X (formerly Twitter) has shut down the software it used to identify organized disinformation campaigns. YouTube, Meta and X have stopped labeling or removing posts that promote false claims of a stolen 2020 presidential election, and all of these platforms have laid off large swathes of their misinformation and civic integrity teams.

    “Real news is the answer to fake news,” said Stein. “We’re in an era of double-checking political news. If you see an image or video that helps one political party or candidate too much, get off social media and see if it’s being reported … Before you share a video of Joe Biden falling down the stairs of Air Force One, for example, see if it’s being reported or debunked by the AP, the New York Times, the Washington Post, or your trusted local media.”

    Challenges to communities of color

    “In the last 12 months, we documented over 600 pieces of disinformation across all major Chinese-language social media. And the top two themes are supporting or deifying Trump, and attacking Biden and Democratic policies,” said Jinxia Niu, program manager for Chinese digital engagement at nonprofit Chinese for Affirmative Action. “This year, AI disinformation presents this problem at a much faster speed.”

    Jinxia Niu, program manager for Chinese digital engagement at Chinese for Affirmative Action, discusses the challenges ethnic media outlets face as they try to monitor for disinformation.

    “The biggest challenge for our community in addressing it is that our in-language media often lacks the money and staff to fact-check information,” she explained. “In our immigrant and limited-English-speaking communities particularly, AI literacy is often close to zero. We’ve already seen scams on Chinese social media with fake AI influencers getting followers to buy fake products. Imagine how dangerous this would be with fake influencers misleading followers about how to vote.”

    While most political disinformation in the Chinese diaspora community is directly translated from English-language social media, Niu said some original content shared by right-wing Chinese influencers includes AI-generated photos of former President Trump engaging with Black supporters, and AI-generated photos attacking President Biden by portraying his supporters as “crazy.”

    “A huge challenge on the ground for the Asian American community is that this disinformation circulates not only on social media, but is also shared directly by influencers, friends and family through encrypted messaging apps,” she continued — most popularly WeChat for Chinese Americans, WhatsApp for Indian Americans, and Signal for Korean and Japanese Americans.

    “These private chats become like unregulated, uncensored public broadcasting that you can’t monitor or document due to well-intentioned data and privacy protections,” Niu explained. “It creates a perfect dilemma where it’s difficult, if not impossible, to intervene with fake and dangerous information.”

    Solutions

    “Nevertheless,” Niu continued, “we’re trying to do something about it through Piyaoba.org,” the first-ever fact-checking website for Chinese American communities. For example, this in-language resource “offers a smart chatbot to send our latest fact-checks to followers in a Telegram chat group … But these solutions are not enough for the much bigger problem we face.”
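    Niu’s description of Piyaoba’s Telegram tool suggests a simple broadcast pattern: a bot account pushes each new fact-check into a community chat group as it is published. Below is a minimal sketch of that pattern using Telegram’s public Bot API (its sendMessage method); the token, group handle, and sample fact-check are hypothetical placeholders, not details of Piyaoba’s actual system.

```python
import requests

# Hypothetical placeholders -- not Piyaoba's real credentials, group, or feed.
BOT_TOKEN = "123456:ABC-EXAMPLE-TOKEN"   # issued by Telegram's @BotFather
CHAT_ID = "@example_factcheck_group"     # group the bot has been added to
API_URL = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"


def broadcast_fact_check(claim: str, verdict: str, source_url: str) -> None:
    """Push one fact-check summary into the Telegram group."""
    text = f"Fact-check: {claim}\nVerdict: {verdict}\nDetails: {source_url}"
    response = requests.post(API_URL, data={"chat_id": CHAT_ID, "text": text})
    response.raise_for_status()  # surface HTTP errors instead of failing silently


if __name__ == "__main__":
    broadcast_fact_check(
        claim="Viral video shows ballots being destroyed",
        verdict="False: footage is from an unrelated 2019 event",
        source_url="https://example.org/fact-checks/123",
    )
```

    A real deployment would presumably add in-language formatting, rate limiting, and a queue fed by the fact-checking workflow, but the broadcast step itself can be this small.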

    “I think one of the biggest misperceptions about misinformation is that the vast majority of it violates social media platforms’ rules. Rather, it falls into a gray area of ‘misleading, but not technically untrue,’” said Brandon Silverman, former CEO and co-founder of the content monitoring platform CrowdTangle, now owned by Meta.

    Brandon Silverman, former CEO and co-founder of CrowdTangle (now owned by Meta), discusses a policy solution he suggests would be effective in tackling fake news and disinformation.

    “It’s the difference between saying that the moon is made of cheese and saying that some people are saying that the moon is made of cheese,” he added. “In that gray area, it’s very hard for platforms to enforce anything as quickly as they can with directly false information.”

    Furthermore, the existence of AI-generated or foreign-controlled accounts “does not mean that they had a measurable or meaningful impact on a topic or election,” he explained. “One of the very goals of disinformation campaigns is ‘flooding the zone’ with so much untrustworthy content that people don’t know what to trust at all … There’s a balance we have to walk of being responsive, but also not playing into their hands by making them seem so powerful that nobody knows who to trust.”

    On the policy level, Silverman said he supported taxing a percentage of the revenue generated by digital advertising on large platforms to fund ethnic and community journalism at the local level.

    He added that large organizations currently fighting AI disinformation include the Knight Foundation, with its Election Hub of free and subsidized services for U.S. newsrooms covering the 2024 federal, state and local elections, and the Brennan Center, with its launch of Meedan, a nonprofit for anti-disinformation news-sharing software and initiatives.

    “Rather than responding to individual content, we should think about the narratives that are being consistently pushed — not only by bots but by real influencers — and how we can push back against the ones we know are false,” Silverman said.
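    Silverman’s narrative-level framing can be made concrete with a toy aggregation: tag each post with the broader narrative it advances, then track which narratives are gaining volume, rather than scoring posts one at a time. The sketch below is illustrative only; the narrative labels and keyword rules are invented, and a production monitoring pipeline would rely on trained classifiers and human review rather than substring matching.

```python
from collections import Counter

# Invented keyword rules mapping posts to broader narratives;
# real systems would use trained classifiers, not substring matches.
NARRATIVE_RULES = {
    "stolen-election": ["rigged", "stolen ballots", "dead voters"],
    "moon-cheese": ["moon is made of cheese"],
}


def tag_narratives(post: str) -> list[str]:
    """Return every narrative whose keywords appear in the post."""
    text = post.lower()
    return [name for name, keywords in NARRATIVE_RULES.items()
            if any(k in text for k in keywords)]


def trending_narratives(posts: list[str]) -> Counter:
    """Count posts per narrative to see what is being consistently pushed."""
    counts: Counter = Counter()
    for post in posts:
        counts.update(tag_narratives(post))
    return counts


posts = [
    "Some people are saying the moon is made of cheese.",
    "They found dead voters on the rolls again!",
    "BREAKING: stolen ballots discovered, election was rigged.",
]
print(trending_narratives(posts).most_common())
# [('stolen-election', 2), ('moon-cheese', 1)]
```

    The aggregate view surfaces which false narratives are being pushed hardest, which is the level at which Silverman suggests responding.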
