Deepfakes: can the Netherlands learn from Denmark?
On October 29, the Netherlands will go to the polls. Deepfakes are a growing problem.
Published on October 21, 2025

Our DATA+ expert, Elcke Vels, explores AI, cyber security, and Dutch innovation. Her "What if..." column imagines bold scenarios beyond the norm.
On October 29, the Netherlands will go to the polls. Many voters are trying to orient themselves in advance, but online it is increasingly difficult to assess what is real and what is fake. Deepfakes are a growing problem: with generative AI, videos and texts can be created or falsified at the touch of a button, making it easy to mislead voters. With concerns about fake videos growing, the Netherlands could learn from a new approach in Denmark.
AI makes our lives easier and more efficient in many ways: it helps doctors with diagnoses, enables cars to drive themselves, and supports companies in their day-to-day work. The technology is popping up everywhere in society, from healthcare to the automotive sector. But its rapid rise also comes with risks, especially during elections. Deepfakes, lifelike fake videos created with AI, can mislead voters and undermine trust in information.
Deepfakes of Wilders, Van der Plas, and Jetten
In the run-up to the European elections in June 2024, an investigation by BNR showed how easily politicians can be made to appear in deepfakes. The editors used AI to create fake videos of Geert Wilders, Caroline van der Plas, and Rob Jetten and posted them on X, TikTok, and Meta's platforms. To their surprise, the videos went through unchallenged: they slipped right past the filters that the tech companies had promised would block misleading AI content around the elections. The investigation shows how easy it is to create realistic fake videos with generative AI, and how vulnerable online platforms still are. Deepfakes of politicians can influence public debate, confuse voters, and even undermine confidence in elections.
Almost no one recognizes deepfakes
Deepfakes are a problem precisely because people are almost unable to recognize them. In a study by iProov, 2,000 people from the United Kingdom and the US were shown a mix of real and fake photos and videos; only 0.1% of participants could reliably tell the difference, and most fell for the fakes. This shows how credible deepfakes have become.
No tsunami yet
It is clear that we must remain alert to deepfakes. Fortunately, research shows that the feared “AI tsunami” did not materialize, at least not during the 2024 elections: experts had predicted a wave of deception that would undermine democracy, but that wave never came.
Increasingly easy to detect
More and more techniques are also being developed to detect deepfakes. For example, the Netherlands Forensic Institute (NFI) has developed a method that looks at subtle color differences in the face. These are caused by the blood flow in small veins under the skin. In real videos, these shades change constantly, but in deepfakes, that pattern is missing. The veins around the eyes, forehead, and jaw are particularly easy to measure. This allows software to see the difference between real and fake footage. The technique is still being researched, but the initial results are promising. Zeno Geradts, forensic digital investigator at the NFI and professor at the University of Amsterdam, expects that this method will soon be ready for practical use. Such innovations are desperately needed because deepfakes are becoming increasingly realistic and difficult to detect.
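The principle described above resembles what researchers call remote photoplethysmography: a heartbeat causes tiny, rhythmic color changes in facial skin, and deepfake generators typically fail to reproduce that rhythm. The sketch below is an illustrative simplification of that idea, not the NFI's actual software; the function name, the chosen frequency band, and the synthetic test clips are all assumptions for demonstration purposes.

```python
import numpy as np

def pulse_signal_strength(frames, fps=30.0):
    """Estimate how much heartbeat-like color variation a stack of
    face-region frames contains (shape: [n_frames, h, w, 3]).
    Returns the fraction of spectral power in the 0.7-4 Hz band,
    roughly the range of a human pulse (42-240 beats per minute)."""
    # Mean green-channel intensity per frame: blood absorbs green
    # light strongly, so perfusion shows up most clearly there.
    g = frames[:, :, :, 1].mean(axis=(1, 2))
    g = g - g.mean()  # remove the constant (DC) component
    spectrum = np.abs(np.fft.rfft(g)) ** 2
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    total = spectrum[1:].sum()  # skip the zero-frequency bin
    return spectrum[band].sum() / total if total > 0 else 0.0

# Synthetic demo: a "real" clip with a faint 1.2 Hz (72 bpm) pulse
# versus a "fake" clip containing only random noise.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0  # 10 seconds at 30 fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)
real = 100 + pulse[:, None, None, None] + rng.normal(0, 0.2, (300, 8, 8, 3))
fake = 100 + rng.normal(0, 0.2, (300, 8, 8, 3))

print(pulse_signal_strength(real) > pulse_signal_strength(fake))  # True
```

In this toy setup the synthetic "real" clip concentrates its spectral power in the pulse band, while the "fake" clip's noise is spread evenly across all frequencies, so a simple threshold already separates the two. Real detection systems, including the NFI's, have to cope with compression, lighting changes, and head movement, which makes the problem far harder than this sketch suggests.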
Learning from other countries
The Netherlands can also learn from how neighboring countries are trying to limit the influence of deepfakes. In Denmark, a bill has been introduced that would outlaw the unauthorized creation and distribution of deepfakes by giving Danes copyright over their own bodies, facial features, and voices. The plan follows incidents involving manipulated videos of the Danish prime minister and pornographic deepfakes. Online platforms that fail to comply with the rules risk heavy fines.
In the Netherlands, creating deepfakes is not yet illegal in itself; at most, a victim can demand that a specific clip be taken down. A stricter policy along Danish lines could help limit the spread of misleading AI content around elections.
The House of Representatives has already discussed the topic: Members of Parliament asked about the possibility of giving people in the Netherlands copyright on their faces and voices, and about making tech companies take more responsibility for removing unwanted deepfakes. Whether and how a similar regulation could be introduced here remains unclear, but the debate shows that the Netherlands is considering new ways to protect voters and citizens from misleading AI content.