
Is Fighting Fake Videos Mission Impossible? No, But Tom Cruise Fakes Show Online Threats Rising

By Thom Fladung, Hennes Communications

Deepfake videos have been making news and shaking up people who worry about truth and fact for several years now. The recent release of Tom Cruise deepfakes on TikTok shows the technology has reached a new, disquieting level of realism.

Chris Ume, the visual effects artist behind the Cruise deepfakes, came forward in recent days to claim responsibility and to urge the adoption of laws governing the responsible use of artificial intelligence and deepfakes, speaking in one of his first interviews with Fortune.

Meanwhile, the increasing realism and ease of creation mean deepfakes must be factored into crisis-scenario planning. We reached out to Scott Juba, owner of Radar Public Relations and Consulting and a frequent partner of Hennes Communications, particularly in our online reputation management work. Scott talked about this growing threat and how to prepare for it.

So, we’ve known about these realistic forgeries of people doing and saying things that never happened for at least a few years now. What jumps out at you about the Tom Cruise deepfakes?

The quality of the videos is what jumps out at me. And the real problem is that, over time, these videos are becoming easier and easier for people to create. To be clear, I don’t think this is something just anybody could do today. But some people already can, and the technology is rapidly advancing.

What are the implications you see right now from a reputation management and crisis communications perspective?

The negative ramifications of this could be enormous, especially for public figures who already have a lot of high-quality video footage available. A video could make a person appear to commit a hate crime or say something truly awful, and it would look so real that many people would believe it before it could be proven fake.

Do you think we can depend on the social media sites themselves – Facebook, TikTok, Instagram, etc. – to police and stop this? Don’t they have a big stake in doing so?

Social networks are working to build deepfake detectors to flag such videos. However, we’ve seen how much trouble they’ve had combating fake and misleading content. Relying solely on social networks to protect you or your organization against deepfake videos would be a mistake.

Some people find it counterintuitive that part of the defense is to embrace social media, the very source of their problem. Can you talk more about why that works?

Staying off social media aids the people creating deepfake videos. It will be essential for brands and public figures to have well-established social media accounts. That makes it more difficult for someone to create a new account impersonating the brand or person and publish a deepfake video through it.

In the case of deepfake videos, I also would think it’s helpful to have plenty of legitimate videos of your leaders out there and available, to help make the fakes more obvious. Do you agree it’s a valid strategy to start using video, or use more of it, at public speaking engagements?

The more high-quality video footage that exists of a person, the easier it will be to create a deepfake video of them. That does not mean, however, that people should bury their heads in the sand and avoid putting video footage of themselves online. Having a large number of legitimate videos will help to make the fakes easier to identify.

Another question or concern we hear a lot is how to know when to react. How do you know when an online reputation threat requires a response? How would you suggest doing threat assessment for deepfake videos?

Immediately employ social media monitoring to determine the following:

  • What is the influence or reach of the account that has published the video?
  • How much engagement (likes, shares, retweets) is the video generating? The more engagement the video is generating, the faster it will spread across social media.
  • Do people, in general, appear to be accepting the video as being true?

And can you imagine a situation where no response might be the best course?

Yes. If the account has little reach, the video is generating minimal engagement and people are not accepting it as true, responding to a deepfake video could draw people’s attention to it rather than letting it fade away. Every situation is unique and must be handled accordingly.

If a deepfake video is generating engagement and being accepted as true, what needs to be done?

Perception is reality. If people initially accept a deepfake video as true, it likely won’t be enough just to say the video is fake and move on. You may need to prove it. There are a number of ways that could be done, and tools that can detect whether a video is a deepfake are being developed. Counter.Social offers one such tool.

Thom Fladung is managing partner of Hennes Communications. Got a crisis brewing? Know that an issue is coming up? Wondering what to do about those critics on Facebook? Contact Thom at fladung@crisiscommunications.com or 216-213-5196.

 

