Deepfake Democracy: AI’s Dark Grip on the Upcoming US Election in 2024

Did you know that some politicians are already blaming their gaffes on deepfakes?

Or that AI is now the campaign strategist that never sleeps?

From digital doppelgangers to AI-powered propaganda machines, we’re lifting the lid on how tech is turning elections into a high-stakes game of “Spot the Real Candidate.” So, grab your tin foil hats (kidding… maybe) and let’s explore how AI is reshaping democracy, for better or worse. Oh, and did I mention that a study predicts daily “AI attacks” by mid-2024?

Strap in as we navigate this brave new world of pixels and politics.

What Are Deepfakes and How Do They Work? 

Deepfakes are AI-generated media that can make people appear to say or do things they never did.

These sophisticated fakes use machine learning algorithms to analyze and replicate a person’s face, voice, and mannerisms.

The process involves training AI models on large datasets of images and videos of the target individual. These models learn to generate new, realistic content featuring the person in fabricated scenarios.

1. Generative Adversarial Networks (GANs) – to create realistic synthetic media

2. Face-swapping algorithms – to superimpose faces onto different bodies

3. Voice synthesis – to clone and manipulate speech
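The adversarial idea behind GANs – a generator trying to fool a discriminator, which in turn learns to catch fakes – can be sketched in miniature. The toy below is a hypothetical illustration, not a real deepfake model: both “networks” are two-parameter functions over plain numbers, whereas actual deepfakes train deep neural networks on images and audio. The setup, learning rates, and step counts here are arbitrary choices for the demo.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: scalars drawn from a Gaussian centred on 4.0.
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: maps noise z to a scalar, g(z) = a*z + b.
g_a, g_b = 1.0, 0.0
# Discriminator: logistic classifier, d(x) = sigmoid(w*x + c).
d_w, d_c = 0.0, 0.0
lr = 0.05

for step in range(2000):
    z = random.gauss(0.0, 1.0)
    x_real = real_sample()
    x_fake = g_a * z + g_b

    # --- Discriminator update: push d(real) toward 1, d(fake) toward 0 ---
    p_real = sigmoid(d_w * x_real + d_c)
    p_fake = sigmoid(d_w * x_fake + d_c)
    # Gradient step on the binary cross-entropy loss.
    d_w += lr * ((1 - p_real) * x_real - p_fake * x_fake)
    d_c += lr * ((1 - p_real) - p_fake)

    # --- Generator update: push d(fake) toward 1 (fool the critic) ---
    p_fake = sigmoid(d_w * x_fake + d_c)
    grad = (1 - p_fake) * d_w  # d/dx_fake of log d(fake)
    g_a += lr * grad * z
    g_b += lr * grad

fakes = [g_a * random.gauss(0.0, 1.0) + g_b for _ in range(1000)]
mean_fake = sum(fakes) / len(fakes)
# The generator's output mean drifts toward the real data's mean.
print(round(mean_fake, 1))
```

The same tug-of-war, scaled up to millions of parameters and trained on thousands of photos of one person, is what lets a GAN render that person in scenes that never happened.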

The AI Campaign Revolution: Friend or Foe?

AI is proving to be a powerful tool for campaign strategies, but it’s not without its controversies. Let’s break down how this technology is changing the game.

When Algorithms Become Political Strategists

First off, AI is revolutionizing data analysis. According to Eurac Research, AI can process massive amounts of information, including voter patterns, political speeches, news articles, and government performance reports.

This gives political parties unprecedented insights to shape their campaigns. 

But that’s not all. Kevin Pérez-Allen, chief communications officer at United States of Care, points out that AI is becoming a Swiss Army knife for campaign strategists.

It’s helping campaigns analyze voting patterns, craft targeted messages, and dissect social media habits.

With decades of experience in political campaign communications, Pérez-Allen has seen firsthand how technology has transformed campaigning.

Perhaps most surprisingly, AI is now entering the content creation arena. ChatGPT, for instance, is being used to produce first drafts of speeches and campaign materials. 

When Seeing Isn’t Believing

While AI brings some nifty perks to the campaign trail, it’s not all sunshine and rainbows.

In fact, this digital wonder child has a dark side that’s giving election integrity experts some serious heartburn.

So, we need to talk about the elephant in the room: deepfakes. These digital doppelgangers are set to be the boogeyman of the upcoming elections.

According to The Journalist’s Source, “AI and deepfakes will be firmly in the public consciousness as we go to the polls this year, with their increased prevalence supercharged by outsized media coverage on the topic.”

But here’s the kicker – it’s not just the actual deepfakes we need to worry about. The mere idea that they exist is enough to cause chaos.

You see, this fear of deepfakes is creating a perfect storm for manipulators. They’re exploiting what’s called the “liar’s dividend.”

What does that mean? Well, bad actors can now claim that real, damning evidence against them is fake, and people might believe it.

For instance, The Journalist’s Source reports a case from April 2023 where an Indian politician tried to wriggle out of a sticky situation by claiming that authentic audio recordings of him were AI-generated. Talk about a get-out-of-jail-free card!

Worse yet, deepfakes aren’t just a theoretical threat. They’re already making waves in U.S. politics. The American Bar Association reports some eye-opening incidents.

In 2023, deepfake technology was used to clone a Chicago mayoral candidate’s voice, making it sound like they condoned police violence.

And if that wasn’t enough, the DeSantis campaign took a swing at Donald Trump with an AI-generated image showing Trump hugging Anthony Fauci – a move designed to rile up Trump’s base.

So, what does all this mean for voters? Well, distinguishing fact from fiction is becoming increasingly challenging. And that’s exactly what these manipulators are counting on.

Enhancing Campaigns or Manipulating Voters?

Let’s start with a stark warning from the academic world. A study by researchers at George Washington University predicts that by mid-2024, we’ll be facing daily “AI attacks” that could throw a wrench into the November general election.

But here’s the twist – the study’s lead author, Neil Johnson, told Al Jazeera that the biggest threat isn’t what you might expect.

According to Johnson, it’s not the obvious fakes we need to worry about most. Instead, he warns,

“It’s going to be nuanced images, changed images, not entirely fake information because fake information attracts the attention of disinformation checkers.”

In other words, the real danger lies in subtle manipulations that fly under the radar of fact-checkers.

Building on this point, Johnson adds another layer to the threat. He believes we’re on the brink of a disinformation tsunami, but it’s not what you might think.

“I do think that we’re going to be suddenly faced with a wave of [disinformation] — lots of things that are not fake, they’re not untrue, but they stretch the truth.”

This blurring of the lines between fact and fiction makes the challenge of identifying manipulation even more complex.

Digital Deception in Action: Recent Deepfake Incidents

As we’re getting into the heart of AI-powered manipulation, we have to examine some real-world examples.

These case studies show us that the use of deepfakes in politics isn’t just theoretical – it’s already here, and it’s evolving faster than you can imagine.

From Fake Robocalls to AI-Generated Attack Ads

Now, let’s look at a real-world example that brings these warnings to life.

On January 21, 2024, Patricia Gingrich, a New Hampshire voter, received a disturbing phone call. The voice on the other end, sounding remarkably like President Joe Biden, urged her not to vote in the upcoming presidential primary.

Gingrich, however, smelled a rat. As she told Al Jazeera,

“I knew Joe Biden would never say that.”

This incident wasn’t just a prank call – it was a deepfake, an AI-generated audio designed to mislead voters. It’s a perfect example of the kind of nuanced, not entirely fake information that Johnson warned about.

The voice sounded real, but the message was false, creating a dangerous mix of authenticity and deception.

Experts are sounding the alarm that such deepfakes pose a high risk to US voters as we approach the November general election.

The danger isn’t just in the false content these deepfakes inject into the race, but in how they erode public trust.

Each fake call, each manipulated image, each AI-generated video chips away at our ability to distinguish truth from fiction.

Tech Giants Fight Back: Solutions to Combat Synthetic Media

The tech world isn’t sitting idly by while democracy gets digitally hijacked. They’re rolling up their sleeves and getting to work on some pretty impressive solutions.

Watermarks, Disclaimers, and AI Detection Tools

First, let’s talk about the big guns in the tech industry. At the Munich Security Conference, a who’s who of tech giants – we’re talking Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok – made a significant move.

These tech behemoths committed to adopting a voluntary framework to tackle deepfakes designed to dupe voters. It’s like the Avengers of the tech world assembling to fight digital disinformation.

But what does this mean in practice? Instead of going for the nuclear option and banning deepfakes outright, these companies are taking a more nuanced approach.

They’re focusing on detection and labeling. Think of it as putting a big, flashing “CAUTION: AI-GENERATED CONTENT” sign on deepfakes. This strategy allows for transparency while preserving freedom of expression.

Now, let’s zoom in on some specific solutions. Companies like Attestiv are stepping up to the plate with deepfake detection technologies.

These tools are particularly effective for organizations like insurance companies and media outlets that need to verify the authenticity of content.

Tech giants Google and Meta are taking it a step further. They’re implementing regulations that require politicians to come clean about their use of AI in election ads.

And here’s the cherry on top: seven major tech companies, including OpenAI, Amazon, and Google, are working on incorporating “watermarks” into their AI-generated content.

Think of it as a digital fingerprint that says, “Hey, I’m AI-generated!” This could be a game-changer in helping voters distinguish between real and synthetic content.
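To make that “digital fingerprint” idea concrete, here’s a minimal, hypothetical sketch of the simplest possible watermark: hiding a tag in the least-significant bits of raw pixel bytes. This is not any vendor’s actual scheme – real systems (think C2PA-style provenance metadata or Google’s SynthID) are far more robust against cropping and re-encoding – and the `WATERMARK` tag and function names are invented for illustration.

```python
WATERMARK = b"AI-GEN"  # hypothetical tag to hide in the pixel data

def embed(pixels: bytearray, tag: bytes) -> bytearray:
    """Hide `tag`, one bit per byte, in the least-significant bits."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Read `length` bytes back out of the LSBs."""
    data = bytearray()
    for byte_idx in range(length):
        value = 0
        for bit_idx in range(8):
            value |= (pixels[byte_idx * 8 + bit_idx] & 1) << bit_idx
        data.append(value)
    return bytes(data)

image = bytearray(range(256))           # stand-in for raw image bytes
marked = embed(image, WATERMARK)
print(extract(marked, len(WATERMARK)))  # b'AI-GEN'
```

Because only the lowest bit of each byte changes, the marked image looks identical to the eye – which is exactly the point: the label is for software, not for viewers.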

The Legal Tug-of-War: Regulating AI in Elections

Let’s take a journey through time and across states to see how the legal framework has developed.

Can Laws Keep Pace with Technology?

Our story begins in 1973, when Wisconsin took a pioneering step by prohibiting the publication of false representations of candidates or referendums to influence elections.

This early move laid the groundwork for future legislation, although it couldn’t have anticipated the AI-driven challenges we face today.

Fast forward to 2019, and we see states like California and Texas stepping up to address the growing threat of deepfakes. California’s approach is particularly noteworthy.

They prohibit the publication of materially deceptive media intended to harm a candidate or deceive voters within 60 days of an election.

However, they provide a loophole: if the media includes a disclosure about its manipulated nature, it’s allowed.

This balanced approach aims to preserve free speech while promoting transparency.
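The logic of the California rule – a 60-day pre-election window, a “materially deceptive” test, and a disclosure exemption – can be expressed as a small decision function. This is a rough sketch of the rule as summarized above, not legal advice; the function name and parameters are invented for illustration.

```python
from datetime import date

ELECTION_WINDOW_DAYS = 60  # California's pre-election window, per the summary above

def violates_rule(publish_date: date, election_date: date,
                  materially_deceptive: bool, has_disclosure: bool) -> bool:
    """Hypothetical sketch of the California deepfake rule described above."""
    days_before = (election_date - publish_date).days
    in_window = 0 <= days_before <= ELECTION_WINDOW_DAYS
    # Disclosed manipulated media is exempt; outside the window nothing applies.
    return in_window and materially_deceptive and not has_disclosure

# A deceptive ad 35 days out violates the rule...
print(violates_rule(date(2024, 10, 1), date(2024, 11, 5), True, False))  # True
# ...but the same ad with a manipulation disclosure does not.
print(violates_rule(date(2024, 10, 1), date(2024, 11, 5), True, True))   # False
```

Seeing the rule as a function also makes the loophole obvious: a single boolean flag (the disclosure) flips the outcome, no matter how deceptive the content is.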

Texas, on the other hand, took a more hardline stance. They outright banned the publication of deepfake videos aimed at harming candidates or influencing elections within 30 days of voting day.

Unlike California’s civil penalties, Texas opted for criminal consequences, signaling the severity with which they view this issue.

From there, legislative efforts accelerated. In 2023, Michigan expanded on California’s model, extending the prohibition period to 90 days before an election and introducing both civil and criminal penalties.

Minnesota followed suit, specifically targeting deepfake media and requiring consent from depicted individuals.

The year 2024 marks a significant uptick in legislative activity. Florida, Indiana, New Mexico, Oregon, and Utah all introduced new laws or expanded existing ones.

A common thread among these recent laws is the requirement for disclaimers on AI-generated or synthetic media used in political advertising.

Interestingly, Utah’s approach stands out. Not only do they require disclaimers, but they’ve also made the use of artificial intelligence an ‘aggravating factor’ in sentencing.

This suggests a recognition of AI’s potential to amplify the harm caused by deceptive political practices.

Furthermore, a clear trend emerges. Lawmakers are increasingly recognizing the need for specific legislation to address AI-generated content in elections.

However, their approaches vary, from outright bans to disclosure requirements, and from civil to criminal penalties.

Conclusion 

As we wrap up this wild ride through the AI-powered political landscape, one thing’s clear: the game has changed, and there’s no putting this genie back in the bottle.

From campaign strategies on steroids to deepfakes that make your head spin, AI is reshaping how we vote, campaign, and even trust what we see and hear.

But don’t throw in the towel just yet. With tech giants stepping up, laws evolving, and voters getting savvier, there’s hope for democracy in the digital age.

So keep your BS detector charged, your critical thinking hat on, and remember: in the world of AI and politics, seeing isn’t always believing.

Now go forth and vote wisely, you digitally-empowered citizens!
