The Biggest Dangers
of Deepfakes

Ferdinand Heimbach

By nature, humans are skeptical about new things. In the case of deepfakes, this is perfectly justified – they are currently used mostly in pornography.

Deepfakes polarize. They have the potential to cause a shitstorm on social media in the shortest possible time. So far, policymakers have failed to set clear legal boundaries for deepfakes, so the technology is developing largely unchecked. Deepfakes come with various risks.

Influencing democracy

Imagine Angela Merkel stating in a public speech that she thinks Putin is an idiot. Such statements could not only lead to a state crisis but also have serious consequences for international relations. Deepfakes could make military decision-makers appear to utter sentences that, in the worst case, lead to war.

Admittedly, a harsh example. But deepfakes could also have a big impact on a local level. When deepfake technology puts words into a politician's mouth that they have never said, the free formation of opinion is compromised. Voters could be misled, and political parties might push their agenda by illegitimate means.

Mark Coeckelbergh is a professor of philosophy at the University of Vienna and devotes himself to the study of technology in general and artificial intelligence in particular.

Technology philosopher Mark Coeckelbergh deals with AI and morality

Deepfake detection is too slow

Many deepfakes can currently be recognized with the naked eye. Detection software can expose deepfakes, too, thus protecting society from misinformation. However: in a few years, deepfake technology will be much more sophisticated and might fool us more easily. Detection software is trained on the artifacts of existing deepfake techniques; when a new type of deepfake emerges, the software must first be retrained to detect it. A cat-and-mouse game is the logical consequence.
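This cat-and-mouse dynamic can be illustrated with a deliberately simplified sketch (all names, numbers, and the one-feature "detector" below are invented for illustration, not taken from any real detection system): a detector trained on one generation of fakes loses accuracy when a new generation with weaker artifacts appears, and only recovers after retraining.

```python
import random
import statistics

random.seed(0)

def make_samples(n, mean, label):
    # Each sample: (artifact_score, label). Real videos (label 0) score low;
    # fakes (label 1) cluster around their generator's artifact level.
    return [(random.gauss(mean, 0.5), label) for _ in range(n)]

def train_threshold(samples):
    # Toy "detector": the midpoint between average real and fake scores.
    reals = [score for score, label in samples if label == 0]
    fakes = [score for score, label in samples if label == 1]
    return (statistics.mean(reals) + statistics.mean(fakes)) / 2

def accuracy(threshold, samples):
    # Flag everything above the threshold as fake.
    hits = sum((score > threshold) == bool(label) for score, label in samples)
    return hits / len(samples)

# Generation 1: fakes leave strong, easily measured artifacts.
train = make_samples(500, 0.0, 0) + make_samples(500, 3.0, 1)
t = train_threshold(train)

# Generation 2: a new technique leaves much weaker artifacts,
# so the old threshold misses most of the new fakes.
gen2 = make_samples(500, 0.0, 0) + make_samples(500, 0.8, 1)

# Retraining on the new generation restores detection power.
t2 = train_threshold(gen2)

print(accuracy(t, train), accuracy(t, gen2), accuracy(t2, gen2))
```

The same pattern holds for real detectors: models trained on the artifacts of known manipulation techniques degrade when a new generation of deepfakes appears, and must be retrained on examples of it.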

Dominik Kovacs' company Defudger develops software that detects deepfakes

The goal is therefore to design detection algorithms so sophisticated that producing new, undetectable deepfake technology becomes an enormous effort – simply too difficult and time-consuming.

Spreading misinformation

Fake news polarizes. Controversial information spreads like wildfire on social media. The problem: platforms rarely review information before it is posted. This enables anyone to circulate misinformation with hardly any restriction.

Hao Li is one of the most renowned deepfake experts

Pornographic content

Pornographic deepfake videos can damage not only the careers of famous personalities but also the lives of private individuals. Graphic deepfake videos of victims that look deceptively real can spread across the Internet. There are many reasons why people create such harmful videos. The Konrad Adenauer Foundation writes:

"It can be about harming someone or taking revenge. It is also known that money can be extorted with fake videos. "

Again, the more sophisticated the technique becomes, the harder it will be to detect fake videos and prove that the person shown isn't actually you.

The legal situation is unclear

Deepfakes are still a new technology, which is why legislation is still vague. However, this is not a legal vacuum: fake photographs have been circulating for decades. The Konrad Adenauer Foundation writes:

"Until now, Germany has lacked legal regulations that draw a line between the permissible processing of videos and their dissemination and inadmissible deceptions."

Thus, legislators must address this issue. Other countries, such as Australia, have already enacted legislation specifically targeting pornographic deepfakes; since 2018, perpetrators there can face several years in prison. On April 21, 2021, the European Commission published a legal framework for a possible regulation of artificial intelligence. Different types of AI would be divided into risk categories and regulated accordingly. Deepfakes per se are categorized as “limited risk” technology in order to enable positive use cases. Detection software, however, is classified as “high risk”, because an artificial intelligence is left to decide on its own whether a video is real or has been manipulated.

Kathryn Harrison is an expert on disinformation in the digital space
