Governments in Europe are moving to tighten regulations on artificial intelligence tools that create sexually explicit deepfakes, particularly so-called “nudification” technologies that generate fake nude images of real people without their consent. Lawmakers say the technology poses serious risks, especially for women, teenagers and children who can become victims of non-consensual images shared online.
Under new legislation being considered in the United Kingdom, using AI tools to create non-consensual sexually explicit images could carry a prison sentence. Research cited by advocacy groups shows that many teenagers are exposed to explicit content at a young age, often unintentionally through smartphones or social media. Critics say the rise of generative AI has made it easier than ever to produce and distribute explicit images, including fabricated material targeting real individuals.
British lawmakers are also pushing for stronger rules for adult websites. Proposed amendments to the Crime and Policing Bill include requirements for platforms to verify performers’ ages, confirm consent for uploaded material and restrict content that depicts violence or actors pretending to be underage. Some lawmakers have also called for banning so-called “step-incest” scenarios, arguing they can normalize harmful behavior.
New laws will ban AI ‘nudification’ tools that use generative AI to create fake nude images without consent.
Anyone who uses these tools could face time behind bars.
— Home Office (@ukhomeoffice) March 13, 2026
The push for stricter controls comes as governments across Europe examine how artificial intelligence is being used to generate sexualized deepfakes. European Union regulators are considering amendments to the bloc’s AI legislation that would explicitly prohibit systems designed to produce child sexual abuse material or sexualized deepfakes of real people.
Regulators in several countries are also investigating AI-generated sexual content linked to new chatbot technologies. Lawmakers emphasize the need for stricter oversight to prevent rapidly advancing AI systems from facilitating harassment, exploitation, or abuse in online spaces. Advocates for the proposed measures believe that technology companies and adult-content platforms should take greater responsibility in moderating harmful material and restricting minors’ access to explicit content. Although implementing these regulations on a global scale presents significant challenges, policymakers stress that collaborative international efforts will be crucial to addressing this escalating issue.