Mid-April, Brussels – while the corona pandemic in Germany reaches a new high and governments impose curfews, restrictions of a different kind are being discussed in the political center of Europe. The European Commission is publishing a new proposal for dealing with artificial intelligence. It aims to set a firm framework in which AI applications can flourish and dangerous uses are prevented. To that end, the Commission divides technologies into different risk categories – "unacceptable risk" and "high risk" among them. Creating deepfakes falls into the "limited risk" category, which means that as soon as such a video is created, it must also be declared as a deepfake.
"We think it's important that people know it's a fake," said Johannes Bahrke, spokesperson for the EU Commission on digital issues, explaining the transparency requirement. The point is to guard against users thinking: "there really is a person making these statements." The context in which the deepfake is used is crucial. Satire? Okay. Personal attacks? Not okay.
The question of context exemplifies the "struggle" for truth. Deepfake technology offers so many different ways of manipulating supposedly certain truths – videos or, more recently, even satellite images. Someone who perceives a deepfake of Tom Cruise as a joke sees a different truth in it than a Scientology follower watching the same clip. Truth results from context – and that context is flexible, especially in the digital space.
Tech philosopher Mark Coeckelbergh was a member of an expert panel of the European Commission on the ethical handling of AI. "I don't think it's a problem that people on social media produce videos to bring humor into our lives. It's like photoshopping things to make memes. But there's a problem when those videos are meant to play a role in different contexts. And I'm thinking especially of political contexts here, where whole elections could be influenced by manipulated content," he says.
Deepfake technology raises all sorts of ethical questions. Who decides whether the Barack Obama video is political or satire? What are the limits of free speech? Questions that have tremendous urgency outside the world of deepfakes. "I think with deepfakes, we're really opening up Pandora's box of ethical challenges for regulators and judges in the legal system to deal with." A Pandora's box of human problems.
Johannes Bahrke also emphasizes, "We don't want to regulate a technology, we want to regulate its use." He says it is important to remember that we are not moving in a lawless space in Europe.
A legal framework already exists. Attorney Christian Solmecke is familiar with the legal situation in Germany; more than 750,000 people have subscribed to his YouTube channel. "We actually already have all the legal tools you need to deal with such deepfakes," Solmecke explains.
But what if deepfakes become part of our everyday life? What if social media is filled with deepfake memes? Society will adapt, says Victor Riparbelli, CEO and co-founder of Synthesia. Synthesia uses deepfake technology to turn text into videos – useful for educational purposes, for example. For Riparbelli, the path is clear: "It will be a slow cultural adoption. It's going to be grassroots-driven, it's not going to be the Hollywood studios or the big news television networks that are going to pick up this technology. It's going to be young people using the technology and doing things that no one thought were possible." Deepfakes, he says, serve as a multiplier for human creativity.
Technology is always human, Mark Coeckelbergh also says. "It's humans who make it, and it's humans who use it. It's also the very human issues that come up in these ethical discussions." Discussions that sometimes leave much to be desired.