Navigating the Rise of Deepfakes: How to Spot AI-Generated Content in Crisis PR


In today’s media landscape, AI-generated content poses one of the most challenging new dilemmas in crisis PR: is this even real? With deepfake technology becoming increasingly sophisticated, the line between truth and fabrication is blurring, creating complex problems for public figures, brands, and the PR professionals who support them. Even a single convincing fake video can have severe consequences.

I recently came across a viral video of Donald Trump discussing “gender insanity” that has been circulating widely across social media, racking up millions of views. I strongly suspect the video is AI-generated, but without official confirmation I can’t say for certain.

So, instead, let’s use it as a case study and walk through the steps I’d take to assess whether a video is authentic or AI-generated if a client brought it to me.

Step-by-Step Checklist for Spotting AI-Generated Content

Look for Official Sources

Check official channels: If a public figure had made a statement this controversial, it would normally appear on their verified social media accounts or official website. In this case, there’s no record of the video on any of Trump’s official channels, which is a significant red flag that it may not be authentic.

Media coverage: Major statements from high-profile figures like Trump tend to lead to widespread news coverage. Yet, there’s little to no coverage of this specific video on credible news sites, another indicator that it could be AI-generated or manipulated.
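For a quick programmatic sanity check on coverage, the free GDELT 2.0 DOC API can be queried for matching news articles. This is a minimal sketch, not a verification tool; the query terms and record limit are illustrative:

```python
import requests

# Search GDELT's global news index for articles matching the claimed statement.
resp = requests.get(
    "https://api.gdeltproject.org/api/v2/doc/doc",
    params={
        "query": '"gender insanity" Trump',  # illustrative query terms
        "mode": "ArtList",
        "format": "json",
        "maxrecords": 20,
    },
    timeout=10,
)
articles = resp.json().get("articles", [])
print(f"{len(articles)} matching articles found")
for article in articles[:5]:
    print(article.get("domain"), "-", article.get("title"))
```

If a supposedly explosive on-camera statement returns next to no coverage from established outlets, that absence is itself evidence worth noting in a client brief.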

Assess the Video Quality

Visual anomalies: Deepfake videos often have subtle, unnatural indicators, such as odd blinking patterns, stiff facial expressions, or poorly aligned mouth movements. In this video, Trump’s blinking seems unusually slow and inconsistent, and his eyes appear darker, which is often characteristic of AI-generated faces. The lip-syncing also seems slightly off, suggesting possible manipulation.
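Blink behaviour can also be checked programmatically. Below is a minimal sketch using OpenCV and dlib’s 68-point landmark model to count blinks via the eye aspect ratio (EAR); the filename and the 0.21 threshold are assumptions you would tune per clip:

```python
import cv2
import dlib
from scipy.spatial import distance

# dlib's pre-trained landmark model, downloaded separately.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def eye_aspect_ratio(pts):
    # EAR: vertical vs. horizontal eye-landmark distances; it drops
    # sharply when the eye closes (Soukupova & Cech, 2016).
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical filename
fps = cap.get(cv2.CAP_PROP_FPS) or 25
blinks, closed, frames = 0, False, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        # Landmarks 36-41 and 42-47 outline the two eyes.
        left = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
        right = [(shape.part(i).x, shape.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < 0.21:        # eye currently closed (assumed threshold)
            closed = True
        elif closed:          # eye reopened: count one blink
            blinks += 1
            closed = False
cap.release()
minutes = frames / (fps * 60)
print(f"{blinks} blinks in {minutes:.1f} min "
      f"(people typically blink 15-20 times per minute)")
```

A blink rate far below the human norm, or long stretches with no blinks at all, would support the suspicion raised by a manual viewing.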

Audio quality and tone: AI-generated voices sometimes sound flat, lacking natural inflection and flow. In this clip, Trump’s speech is continuous, with barely any pauses or change in intonation. There’s a robotic steadiness to the tone, with none of his usual emphatic style or shifts in pitch. This flat, unchanging intonation is often a telltale sign of synthetic audio.
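Intonation can be quantified too. Here’s a rough sketch with librosa, assuming the audio track has already been extracted to a WAV file (for example with ffmpeg); pitch that barely varies across a long clip is consistent with synthetic speech, though this is a heuristic, not proof:

```python
import numpy as np
import librosa

# Load the extracted audio track (hypothetical filename).
y, sr = librosa.load("suspect_audio.wav", sr=None)

# Estimate the fundamental frequency (pitch) frame by frame.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
f0 = f0[~np.isnan(f0)]  # keep voiced frames only

print(f"Median pitch: {np.median(f0):.1f} Hz")
# An unusually low spread suggests monotone, possibly synthetic, delivery.
print(f"Pitch std dev: {np.std(f0):.1f} Hz")
```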

Check for Contextual Clues

Length of the video: An unusually long video, such as an 11-hour clip, can itself be a red flag. This marathon stream format is sometimes used to discourage close scrutiny, especially when footage is looped or degraded to make detailed analysis difficult.

Inconsistent content: AI videos often splice together phrases from previous speeches or public appearances. This video includes phrases that sound familiar, suggesting it may have been pieced together from different statements Trump has made in the past. This “cut-and-paste” approach is another marker of AI manipulation.

Analyse Public Reactions and Comments

Social media comments: The public often notices small, unnatural elements in manipulated videos. In this case, many commenters seem sceptical, with remarks like “This seems off” and “Is this even real?” Screenshots of these comments highlight the public’s instinctive doubt, suggesting that others also sense something unusual.
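If you can export the comments (many platforms offer data-export tools), even a crude keyword scan can put a rough number on that public doubt. This toy sketch assumes comments saved one per line; the phrase list is purely illustrative:

```python
# Illustrative phrases that signal audience scepticism.
SKEPTIC_PHRASES = ["seems off", "is this real", "deepfake", "ai generated", "fake"]

with open("comments.txt", encoding="utf-8") as f:  # hypothetical export
    comments = [line.strip().lower() for line in f if line.strip()]

flagged = [c for c in comments if any(p in c for p in SKEPTIC_PHRASES)]
print(f"{len(flagged)} of {len(comments)} comments express doubt "
      f"({100 * len(flagged) / max(len(comments), 1):.0f}%)")
```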

Side-by-side comparison: Posting a side-by-side comparison with a verified Trump video can help highlight these differences. Here, a comparison reveals subtle discrepancies in facial expressions, tone, and eye contact – areas where deepfakes often fall short. His eyes in the suspected video seem darker and less expressive, lacking the natural “spark” seen in authentic footage.


Consider Cultural Context

Relevance of the content: Certain statements may be controversial in one cultural context but not in another. This video’s content seems highly provocative, more so than most official statements Trump has made, suggesting it may have been engineered for shock value rather than reflecting a genuine position. The out-of-character nature of the statement is itself a clue that it might be fabricated for effect.

Compare with Known Footage

Side-by-side analysis: Directly comparing this video with known, authentic footage can reveal discrepancies. In this case, examining a genuine video side-by-side shows subtle differences in mannerisms, facial movements, and even eye colouration. These inconsistencies often signal that the content has been artificially manipulated; a rough programmatic complement to the visual check is sketched below.
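This minimal sketch uses the open-source face_recognition library to compare face embeddings from frame grabs of each video. One caveat: a competent deepfake is built to impersonate its subject and may still pass an identity check, so treat the distance as one signal among many; the filenames are placeholders:

```python
import face_recognition

def encoding_from(path):
    # Return the first face encoding found in an image, or None.
    # In practice, sample many frames from each video and compare
    # distributions rather than single stills.
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

suspect = encoding_from("suspect_frame.jpg")    # hypothetical frame grabs
verified = encoding_from("verified_frame.jpg")

if suspect is not None and verified is not None:
    dist = face_recognition.face_distance([verified], suspect)[0]
    # The same person typically scores below ~0.6 on this metric.
    print(f"Embedding distance: {dist:.3f}")
else:
    print("No face found in one of the images")
```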

As you watch the video, ask yourself: do you think it’s a deepfake?

Why Social Media Platforms Need a System to Flag Potential Deepfakes

The viral spread of this Trump “gender insanity” video demonstrates why social media platforms like YouTube, Instagram, and Facebook need robust systems to flag potential deepfakes. These platforms are often the first places people turn to for news, and when a questionable video gains millions of views, it’s critical to have safeguards in place.

A Proposed Warning System

If platforms used advanced AI-detection tools to scan videos for signs of manipulation, they could then issue a visible warning label on flagged content. This label would alert viewers that the video might be AI-generated or digitally altered, prompting them to approach it with caution. A warning label could also discourage users from resharing potentially misleading content without verifying its authenticity.
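To make this concrete, here is a purely illustrative sketch of what such a pipeline might look like. The `score_manipulation` function stands in for whatever detection model a platform actually runs; the thresholds and dummy score are assumptions, not a real API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Verdict:
    score: float          # 0.0 = likely authentic, 1.0 = likely manipulated
    label: Optional[str]  # warning text shown to viewers, if any

def score_manipulation(video_path: str) -> float:
    # Placeholder: a real system would run an ensemble of visual and
    # audio deepfake detectors here. Dummy value for illustration only.
    return 0.82

def review(video_path: str, warn_at: float = 0.7, escalate_at: float = 0.95) -> Verdict:
    score = score_manipulation(video_path)
    if score >= escalate_at:
        return Verdict(score, "Held pending human review")
    if score >= warn_at:
        return Verdict(score, "Warning: this video may be AI-generated or digitally altered")
    return Verdict(score, None)

print(review("suspect_clip.mp4"))  # hypothetical upload
```

The two-tier design reflects the trade-off discussed above: a visible label for borderline scores preserves speech while slowing reshares, and only the highest-confidence detections are escalated to human moderators.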

Why Warnings Matter

A visible warning system could drastically reduce the spread of misinformation, helping protect public figures, brands, and audiences from the potentially harmful effects of deepfakes. When users are alerted that content might be manipulated, they are more likely to question its authenticity and less likely to share it impulsively. Such transparency is increasingly necessary in an era where one false video can spark fear, controversy, and unnecessary backlash.

The Dangers of AI-Generated Content in Crisis PR

This Trump “gender insanity” video, whether AI-generated or not, highlights the growing dangers of deepfake content in the world of crisis PR – a threat we’ve seen before with other public figures. Many deepfakes featuring celebrities and political leaders have surfaced, from manipulated videos of Mark Zuckerberg seemingly boasting about controlling people’s data, to altered footage of Barack Obama appearing to make inflammatory statements, to even Tom Cruise engaging in bizarre stunts. These AI-generated videos exploit recognisable voices and faces, making it difficult for viewers to discern what’s real from what’s fabricated.

In a world where anyone’s likeness can be realistically manipulated, deepfakes are becoming a significant challenge for public figures, who now face the risk of misinformation and reputational damage on an unprecedented scale.

Here’s why it’s crucial to be vigilant:

Erosion of Trust: When fake videos circulate widely, they undermine public trust. People begin questioning even legitimate content, which can lead to scepticism and cynicism. For public figures and brands, it’s a steep uphill battle to maintain credibility when audiences are primed to doubt everything.

Spread of Fear and Misinformation: A provocative AI-generated video can easily spark outrage and spread misinformation across social media, where it can gain millions of views in hours. People start to believe fabricated narratives, which can lead to real-life consequences, from fear and anger to reputational damage.

Challenges for Crisis PR: Handling a real crisis is difficult enough, but now we’re also faced with the challenge of verifying whether a crisis even exists. For PR professionals, the job now starts a step earlier: before any damage control, we have to determine whether there’s a genuine issue to address at all. This extra layer of complexity makes crisis management both more time-consuming and more critical to handle correctly.

Crisis PR’s New Reality: Navigating Deepfakes

Deepfakes and AI-generated content are fundamentally changing the landscape of public relations. In crisis PR, we’re learning that our first responsibility might not always be managing a crisis – it might be verifying if the crisis is real in the first place.

So here’s my (slightly cautious) takeaway: I think this video is AI, but if I’m wrong, I’m willing to look a bit silly… it wouldn’t be the first time, and they’re so bloody realistic now! I’ll be curious to see whether this actually turns out to be AI because, honestly, I find the whole thing creepy. But the fact that we even need to ask the question speaks volumes about the impact of AI in media.

As deepfake technology becomes more accessible, the need for critical analysis, scepticism, and verification is greater than ever. And in the meantime, as PR professionals, we’re adapting to this new reality, helping our clients navigate the complexities of public perception in an era where even reality itself is up for debate.
