The Use of Deepfakes and the Danger it Poses

Recently I wrote about the roots of deepfake technology and its consequences in a Forbes Technology Council article.

Deepfakes use AI to alter real video content or images to create fake ones. One example is superimposing someone’s face onto a celebrity’s body. This distortion of media content is already causing quite a stir. Consider the CNN reporter who was the victim of a doctored video made to look as though he was shoving a press conference facilitator, or the video of House Speaker Nancy Pelosi that was altered to make her appear drunk and slurring her words.

These types of deepfakes can harm reputations, but they can also have even more catastrophic consequences. Imagine an edited video made to sound like a world leader declaring war or verbally attacking a foreign country.

As I shared in the Forbes article, the same AI technology that enables deepfakes can also be the antidote that eliminates them. Large tech firms such as Facebook and Microsoft have already launched initiatives to build technology that can detect and remove them, amassing giant databases of examples to train algorithms to sniff them out.

How to Spot a Deepfake

As amateur “deepfakers” jump on the bandwagon, creating what are coming to be known as cheap fakes, there are tell-tale signs that what you are seeing or hearing is not real. Take, for example, people who do not blink in videos, or shadows that don’t fall in the right places. Other sure signs of a fake are faces that don’t fit the body or that appear blurred. For training detection algorithms, it is useful to have both good and poor deepfakes so that the resulting models are as comprehensive as possible; a minimal sketch of that training idea follows.
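To make that training idea concrete, here is a minimal, illustrative sketch of teaching a model to label video frames as real or fake. It assumes PyTorch; the tiny network, the 64x64 frame size, and the random placeholder data are stand-ins for illustration only, not a description of how Facebook, Microsoft, or any production system actually detects deepfakes.

# Illustrative sketch: train a tiny binary classifier to label frames as
# "real" (0) or "fake" (1). Real detectors use far larger models and large
# datasets of both high-quality and low-quality deepfakes, as noted above.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 1)  # assumes 64x64 input frames

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))  # raw logit: > 0 means "fake"

# Placeholder tensors standing in for a labeled dataset of real and fake frames.
frames = torch.randn(8, 3, 64, 64)            # batch of 64x64 RGB frames
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = deepfake, 0 = genuine

model = FrameClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(5):  # a few dummy training steps
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()

The point of mixing good and poor fakes in such a dataset is simply that the classifier sees both the obvious artifacts (missing blinks, wrong shadows, blurred faces) and the subtler ones, rather than learning to catch only cheap fakes.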

It’s Time for Government to Step In

Despite these efforts, AI alone won’t be able to eliminate the deepfake problem; it also requires government intervention. Last June, Congress held its first-ever hearing on deepfakes, during which lawmakers discussed making social media firms liable for the damages deepfakes cause. This may be a step in the right direction, since today deepfakes can be created and spread with impunity; no laws are in place to govern them. There needs to be some liability for those who create and share them.

There’s no doubt that advanced technologies are doing great things, but they also have the power to cause harm. To change the course of deepfakes, applying AI technology to sniff them out, while establishing more regulatory control over social media platforms, just may help put a stop to the deceit, lies and danger they pose.
