AI-Generated Election Content Is Here, And The Social Networks Aren’t Prepared

Earlier this year, on the eve of Chicago's mayoral election, a video of Paul Vallas, the moderate Democratic candidate, appeared online. Tweeted by "Chicago Lakefront News," it appeared to show Vallas railing against lawlessness in Chicago and suggesting that there was a time when "no one would bat an eye" at fatal police shootings.

The video, which appeared authentic, was widely shared before the Vallas campaign denounced it as an AI-generated fake and the two-day-old Twitter account that posted it disappeared. While it's impossible to say whether it had any impact on Vallas's loss to progressive Brandon Johnson, a former teacher and union organizer, it is a lower-stakes glimpse of the high-stakes AI deceptions that could muddy public discourse during the upcoming presidential election. And it raises a key question: How will platforms like Facebook and Twitter mitigate them?

That's a daunting challenge. With no laws regulating how AI can be used in political campaigns, it falls to the platforms to determine which deepfakes users will see in their feeds, and right now most are struggling to work out how to self-regulate. "These are threats to our very democracies," Hany Farid, an electrical engineering and computer science professor at UC Berkeley, told Forbes. "I don't see the platforms taking this seriously."

Right now, most of the biggest social media platforms don’t have specific policies related to AI-generated content, political or otherwise.

On Meta's platforms Facebook and Instagram, when content is flagged as potential misinformation, third-party fact-checkers review it and are prompted to debunk "faked, manipulated or transformed audio, video, or photos," regardless of whether the content was manipulated through old-school photoshopping or AI generation tools, Meta spokesperson Kevin McAlister told Forbes.

Similarly, Reddit will continue to rely on its policies against content manipulation, which apply to "disinformation campaigns, falsified documents and deep fakes intended to mislead." YouTube will likewise remove election-related content that violates its misinformation policies, which expressly prohibit images that have been technically manipulated to mislead users and may pose a serious risk of harm.
