At first glance, deepfakes are images, videos and audio clips that may look or sound real. But they’re not, and they’re becoming increasingly easy to create.
Experts say the growing prevalence of fabricated material created with artificial intelligence could pose serious risks in the upcoming November general election and elections across the globe.
“There’s going to be 63 countries around the world involving 2.6 billion people covered by national elections [in 2024],” said Al Tompkins with the Poynter Institute, a nonprofit that provides fact-checking and ethics training to citizens and journalists. “When you get that many countries that are up for grabs for disinformation, you can start influencing global politics, global policies, global security.”
In a newly published report, researchers with the nonprofit Center for Countering Digital Hate used several popular AI programs to generate images of President Biden in a hospital bed, former President Donald Trump sitting in a jail cell, and armed militia members outside of polling places. The exercise, the researchers said, demonstrated how quickly and easily these realistic deepfakes can be created, even in programs that claim to have rules prohibiting content of this kind.
The report concludes, among other things, that AI and social media platforms must do more to prevent users from generating and sharing misleading content about geopolitical events, candidates for office, elections, or public figures.
Ahead of the New Hampshire primary on Jan. 23, a robocall impersonating President Joe Biden’s voice spread disinformation by telling people not to vote in the election.
As the 2024 election ramps up, there has been growing concern about AI’s use in political or election material. Lawmakers and the Federal Election Commission (FEC) have considered rules for the use of AI in political advertising, but there currently are no federal laws against it.
In Florida, Gov. Ron DeSantis just signed legislation requiring political candidates to disclose if they use AI in their ads or other communications with voters. At least one watchdog group has called the law weak and inadequate.
With generative AI becoming more prevalent ahead of the presidential election, VERIFY is here to help distinguish what’s real and what’s not.
THE SOURCES
- Al Tompkins, Poynter Institute
- Federal Election Commission (FEC)
- Federal Communications Commission (FCC)
- Center for Countering Digital Hate
- Siwei Lyu, Ph.D., director of the UB Media Forensic Lab at SUNY Buffalo
- White House Press Secretary Karine Jean-Pierre
- The New Hampshire Attorney General’s Office
- InVID and RevEye, image and video verification tools
- “Beat Biden,” an ad released by the Republican Party
- “Who’s More Woke,” a radio ad released by Courageous Conservatives PAC
- “Trump Attacks Iowa,” an ad released by the Never Back Down PAC
WHAT WE FOUND
If you see a political ad and are wondering if it was made using AI, here are some tips to spot red flags:
1. Disclaimer
Some social media platforms now require political advertisements made with generative AI to carry a label or disclaimer stating that the content was created with artificial intelligence technologies. This could be in the fine print, small text, audio narration, or a credit in the video or image.
This is one of the first things we look for when fact-checking an image or video. Is there a clear disclaimer that the video was created with AI? Was it shared from a social media or online account whose bio states that they create AI content?
VERIFY found a disclaimer in small text on a political ad released in April 2023 by the Republican Party, titled “Beat Biden.” In the top left-hand corner of the video, a disclaimer says: “Built entirely with AI imagery.” The text is hard to see, but it’s there.
Not all AI-generated content will include a disclaimer, and one isn’t always legally required.
2. Details
Generative AI technologies can struggle with getting certain details right, like facial features, human body shapes, patterns and textures, the Poynter Institute’s Al Tompkins told VERIFY. For instance, hands or fingers may appear misshapen or distorted and skin might look too smooth.
In the “Beat Biden” ad, Biden is seen standing with his arm around Vice President Kamala Harris.
While it does generally look like Biden, his mouth is distorted and out of proportion. Later in the video, a crowd of people can be seen standing next to a building. Some of the people in the crowd are misshapen; for example, one man’s head appears stretched. Both are signs the ad is AI-generated.
3. Audio
AI-generated audio can be particularly concerning because it’s harder to spot AI markers in sound alone, as with radio ads or phone calls.
But there are ways to tell whether audio in an advertisement is AI-generated. First, listen to the tone and pace of the voices. Do they sound too robotic? Is there little to no tone or inflection? Is there breathing? When people talk, you can typically hear breaths and other natural sounds.
Ahead of the Jan. 23 New Hampshire primary, various news outlets obtained a recording of a robocall that sounded like President Joe Biden urging people not to vote in the primary.
“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” the audio says.
Siwei Lyu, Ph.D., a deepfake expert and director of the UB Media Forensic Lab at SUNY Buffalo, told VERIFY in an email that when he analyzed the audio, he found it lacked the intonation variations present in Biden’s actual voice.
White House Press Secretary Karine Jean-Pierre also confirmed the audio was fake.
In a political ad titled “Trump Attacks Iowa” from the pro-Ron DeSantis PAC Never Back Down, we VERIFIED that AI-generated audio mimicking Trump’s voice was used for a portion of the ad.
The audio that is supposedly Trump’s voice sounds robotic, and there are no pauses between words or sentences. At the end of the quote, the enunciation of the word “her” is inconsistent with the rest of the audio in the recording. Those are key signs the audio was AI-generated.
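One crude way to see what experts like Lyu mean by missing intonation is to measure how much a voice’s pitch actually moves. The Python sketch below is an illustration only, assuming the open-source librosa and numpy libraries and a hypothetical file named ad_audio.wav; it is not a deepfake detector, just a rough proxy for the kind of pitch-variation analysis forensic researchers perform.

```python
# Rough sketch: measure pitch (f0) variation in a speech clip.
# Assumes the librosa and numpy libraries are installed;
# "ad_audio.wav" is a hypothetical placeholder file name.
import librosa
import numpy as np

# Load the clip as mono audio at its native sample rate.
y, sr = librosa.load("ad_audio.wav", sr=None)

# Estimate the fundamental frequency (pitch) frame by frame.
# pyin returns NaN for unvoiced frames (silences, breaths, consonants).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),  # ~65 Hz, below most speaking voices
    fmax=librosa.note_to_hz("C7"),  # ~2093 Hz, well above speech
    sr=sr,
)

# Natural speech shows noticeable pitch movement; an unusually flat
# pitch track can be one red flag among many, never proof on its own.
print(f"Pitch standard deviation: {np.nanstd(f0):.1f} Hz")
print(f"Share of voiced frames: {np.mean(voiced_flag):.0%}")
```

Because every speaker is different, the only meaningful baseline is a genuine recording of the same person; absolute numbers mean little on their own.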
4. Reverse search
When VERIFY fact-checks an image or video, we conduct a reverse image search to determine whether it has appeared on the internet before, whether it has been edited from an earlier version, or whether it’s being shared out of context. For example, an old tornado photo may be shared out of context in relation to a current weather event. But if a photo or video was created with generative AI, there often won’t be a match when we reverse search. That is a red flag that it could have been created with AI.
To conduct a reverse image search through Google from your computer, follow these steps:
- Right-click on the photo in your Chrome browser and select “Search image with Google.” The results page will show you other places the image may have appeared.
- You can also go to images.google.com, click on the camera icon, and upload an image that you’ve saved to your computer. This should also show a results page.
Google also has guides to reverse image searching from your iPhone or Android device.
RevEye is a browser extension you can install that also allows you to conduct a reverse image search by right-clicking any image online.
A website called TinEye also allows you to upload an image from your computer to find where it has appeared online.
When we fact-checked a political ad that appeared to show images of Trump hugging Dr. Anthony Fauci, we conducted a reverse image search.
We found no instances of the two embracing before the ad aired. Given the constant presence of media at public events for the president – and especially for appearances alongside Fauci during the COVID-19 pandemic – any such hug would have been photographed and widely published.
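Reverse search tells you where an image has appeared; if you do find a candidate original, comparing the two files is the next step. Here is a minimal Python sketch, assuming the open-source Pillow and imagehash libraries and hypothetical file names, that uses a perceptual hash to judge whether two images are near-identical, edited copies, or unrelated.

```python
# Rough sketch: compare a suspect image to a candidate original using
# a perceptual hash. Assumes Pillow and imagehash are installed; the
# file names are hypothetical placeholders.
from PIL import Image
import imagehash

# Perceptual hashes summarize an image's overall structure, so near-
# duplicates hash to nearly the same value even after resizing or
# recompression.
suspect = imagehash.phash(Image.open("suspect_ad_frame.png"))
original = imagehash.phash(Image.open("candidate_original.jpg"))

# Subtracting two hashes gives a Hamming distance (0 = identical).
distance = suspect - original
print(f"Hash distance: {distance}")

# The cutoffs below are rough rules of thumb, not authoritative.
if distance <= 5:
    print("Likely the same image, possibly recompressed or resized.")
elif distance <= 12:
    print("Similar structure; could be an edited version. Look closer.")
else:
    print("Probably different images.")
```

An AI-generated image will usually match nothing at all, which is exactly the red flag the reverse search is meant to surface.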
5. Check the context
As with anything you might see on social media or television, or hear on the radio, you should always check the context. This includes looking into the organization that made the ad and learning more about the candidates or issues that might be the subject of the ad. This is helpful guidance for anything you see – not just AI-generated content.
When we say check the context, that includes but isn’t limited to:
- Do a gut check. Does what you're seeing or hearing make sense? If it feels off, do an internet search of the topic and the person.
- Look for reputable articles, resources or primary sources that back up any claim.
- Check for any official statements from the subject of the content in question.
- Look for pieces of information or details that are inconsistent with others that you have read or seen.
- Look for fact-checks of the content from fact-checking sites such as VERIFY.
Generative artificial intelligence technologies are improving rapidly, making detection more difficult. While there are AI detection tools available online, they aren’t 100% reliable. Tools such as Deepware AI’s scanner and AI Voice Detector can be helpful, but they should be used only to help spot red flags, not as the sole way to verify whether something is real.
If you have questions about AI, or if you think something was created with AI and want VERIFY to check it out, email us at VERIFY@10TampaBay.com or text 727-577-8522.
The Associated Press contributed to this report.