Georgia Bill Seeks to Ban AI Deepfakes in Elections, Make Releasing Them a Felony Crime


A bill filed last week would make it a felony in Georgia to use “deepfake” audio and images, including those created with artificial intelligence (AI) technology, in the context of an election.

Georgia State Senator John Albers (R-Alpharetta) filed SB 392, which, according to its summary, would make it a criminal offense to use deepfake technology to interfere with an election.

The legislation would specifically make it a crime for a person to create, publish, broadcast, stream, or upload a deepfake “within 90 days of an election with the intent” to reduce “a candidate’s chance of being elected” or otherwise influence the results “of an election or referendum.”

Under the legislation, creating such a deepfake would be a felony offense, punishable by one to five years in prison and a fine of up to $50,000.

The federal government defines a deepfake as “a video, photo, or audio recording that seems real but has been manipulated with AI,” which can “replicate faces, manipulate facial expressions, synthesize faces, and synthesize speech” to “depict someone appearing to say or do something that they never in fact said or did.”


Explaining his bill to the Atlanta Journal-Constitution, Albers predicted deepfakes will be made “in real time” during the 2024 election cycle “as we have never seen it before.” He warned, “If something’s not illegal, you better believe people on that moral and ethical edge are going to use that to their perverted advantage.”

There have already been at least two examples of deepfakes being used during the 2024 presidential election.

In June 2023, the presidential campaign for Florida Governor Ron DeSantis shared a video that appeared to show former President Donald Trump physically embracing Dr. Anthony Fauci, who was the director of the National Institute of Allergy and Infectious Diseases (NIAID) from 1984 until 2022.

Though two days passed before the media determined the imagery to be fake, The New York Times reported the DeSantis campaign argued the deception was obvious and drew comparisons to prolific Internet memes that mocked the Florida governor.

More recently, Democratic voters in New Hampshire received an automated phone call, purportedly featuring the voice of President Joe Biden, urging them not to vote in the state’s presidential primary. The New Hampshire Department of Justice called it an example of “voter suppression” in a press release that stated the “message appears to be artificially generated based on initial indications.”

The creator of the Biden deepfake was later banned by the AI company behind the software used to create the audio, according to Bloomberg, which reported that the company’s policy strictly regulates political deepfakes and only allows them to be used for caricature, parody, or satire.

Last November, a professor at the University of Chicago warned in a white paper that AI and deepfake technology could be especially damaging if “released widely on social media or traditional media very close to the election when there is not enough time for responsible actors to figure out what’s true.”

– – –

Tom Pappert is the lead reporter for The Tennessee Star, and also reports for The Georgia Star News, The Virginia Star, and The Arizona Sun Times. Follow Tom on X/Twitter. Email tips to [email protected].
