The use of artificial intelligence to generate sexually explicit “deepfake” imagery has prompted significant public concern, regulatory scrutiny and legal reform. Recent controversies involving the use of AI tools to create sexualised images of women and children have highlighted serious gaps in existing legal protections.
This article examines the UK’s current legal framework, the proposed legislative response, and the options presently available to victims of AI-generated sexual deepfake abuse.
What is the problem?
Advances in generative AI have made it increasingly easy to create highly realistic sexualised images of identifiable individuals without their consent. In many cases, users can upload or reference publicly available photographs and instruct an AI tool to produce pornographic or sexually explicit images depicting the subject.
While not all AI tools permit this, the Grok AI tool integrated into the X platform has been widely criticised for enabling users to generate realistic sexual images and videos of real people with minimal safeguards.
An analysis by AI Forensics of images generated by Grok found that 53% contained individuals in minimal attire; of those, 81% presented as women and 2% depicted persons appearing to be 18 years old or younger.[i]
Given that one in three women report having experienced online abuse, the potential for deepfake technology to be weaponised against women and girls is significant.[ii]
What has the response been?
Government responses internationally have been uneven, reflecting the challenge of regulating rapidly developing technologies.
Most states have expressed an intention to investigate and legislate on the issue, while others, such as Malaysia and Indonesia, have attempted to act more swiftly and decisively by blocking access to Grok entirely.[iii]
However, the ease with which geoblocking measures can be circumvented, including through the use of VPNs, demonstrates that platform-level restrictions alone are unlikely to provide effective protection. A multi-layered response is required, combining criminal law reform, regulatory enforcement, and platform accountability.
What protection does UK law currently offer?
Images of children
It is already a criminal offence to create, possess or distribute indecent images of children. This includes both photographs and “pseudo-photographs”, defined as images which appear to be photographs and include those generated by computer graphics.[iv] AI-generated sexual images of children therefore fall squarely within existing child sexual abuse material offences.
Images of adults
The Online Safety Act 2023 inserted sections 66A and 66B into the Sexual Offences Act 2003, criminalising the sharing, or threatening to share, of intimate images of adults, including deepfake images, without consent or a reasonable belief in consent.
However, until recently, the creation of sexual deepfakes of adults (without sharing them) fell outside the scope of criminal liability.
The Data (Use and Access) Act 2025
Section 138 of the Data (Use and Access) Act 2025 extends criminal liability by creating an offence of making, or requesting the making of, a “purported intimate image” of an adult without consent or a reasonable belief in consent. This provision directly addresses AI-generated sexual deepfakes.
At the time of writing, section 138 has been enacted but is not yet in force. Its commencement requires secondary legislation.
The Government has publicly committed to criminalising the creation of sexually explicit deepfake images and the installation of equipment intended to facilitate such offences.[v] What remains unclear is whether this will be achieved by bringing section 138 into force via a commencement order, or by introducing a new statutory instrument creating parallel offences.
Other relevant offences
Section 63 of the Criminal Justice and Immigration Act 2008 criminalises the possession of extreme pornographic images, and it is an offence under section 37 of the Criminal Justice and Courts Act 2015 to possess pornographic images of rape and assault by penetration.
There are also circumstances in which the use of AI to generate sexual deepfakes would be capable of constituting harassment under sections 2 or 4 of the Protection from Harassment Act 1997, though a course of conduct is required for the offence to be made out.
Regulatory action: Ofcom
Alongside legislative reform, the UK’s media and online safety regulator, Ofcom, has launched a formal investigation into X under the Online Safety Act 2023. That Act imposes statutory duties on platforms to assess and mitigate the risk of users encountering illegal and harmful content.
Ofcom’s investigation will consider whether X failed to take proportionate steps to prevent the creation and dissemination of non-consensual sexual imagery. Depending on its findings, Ofcom has the power to impose significant financial penalties and, in extreme cases, seek measures restricting access to services in the UK.
The scope and timeframe of any enforcement action remain uncertain.
Remedies available to victims
Although specific provisions are yet to come into force, victims of sexualised AI-generated deepfakes already have options.
The first recourse is to report the matter to the police, relying on existing law. However, even where criminal offences are available, the complexity of investigations and prosecutions, which require specialist (and sometimes expert) knowledge, together with the need for cooperation from the platforms on which such images are shared (many of which are based in other jurisdictions), can present a significant barrier to charges being brought.
A takedown notice is often a more immediate remedy for securing the removal of sexualised deepfakes. Free digital removal tools (such as StopNCII.org and the Report Remove service) use hashing to create a digital fingerprint of the image in question, allowing participating sites to detect and block the image automatically without anyone actually viewing the file.
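For readers interested in the mechanics, the sketch below illustrates the general perceptual-hashing idea on which such services rely, using the open-source Python library imagehash. It is a simplified illustration only: the function names and matching threshold are invented for this example, and the real services use their own hashing schemes and infrastructure (with StopNCII.org, for instance, the hash is generated on the victim's own device, so the image itself is never uploaded).

```python
# Simplified illustration of perceptual hashing; NOT the actual algorithm
# or API used by StopNCII.org or Report Remove. Function names and the
# matching threshold are invented for demonstration purposes.
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    # Derive a compact "digital fingerprint" from an image file.
    # Only this hash needs to be shared with participating platforms;
    # the image itself never has to leave the victim's device.
    return imagehash.phash(Image.open(path))

def is_match(upload_path: str, reported: imagehash.ImageHash,
             threshold: int = 5) -> bool:
    # Perceptual hashes tolerate minor edits (resizing, recompression,
    # small crops), so a small Hamming distance between the two hashes
    # still counts as a match, unlike an exact cryptographic hash.
    return (fingerprint(upload_path) - reported) <= threshold

# Hypothetical usage: a platform screens each new upload against the
# fingerprints of reported images and blocks anything that matches.
reported_hash = fingerprint("reported_image.jpg")
print(is_match("new_upload.jpg", reported_hash))
```

The key design point is that matching happens on fingerprints rather than files, which is why participating sites can block a reported image without any person, or the service itself, ever viewing it.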
Victims may also wish to consider the availability of protective orders (such as Non-Molestation Orders, Stalking Protection Orders, or Domestic Abuse Protection Orders) to strengthen the response to any future generation or dissemination of AI-generated sexualised deepfakes.
Once immediate action has been taken to remove the image, instigate a prosecution and/or prevent repeat offending, victims may also wish to consider a civil action against the perpetrator for misuse of private information, and to challenge any refusal to prosecute by the police or Crown Prosecution Service (CPS) under the Victims' Right to Review.
Conclusion
The proliferation of AI-generated sexual deepfakes has exposed the capacity of emerging technologies to facilitate serious forms of abuse, particularly against women and children. While existing criminal law provides protection in relation to images of children, legal protections for adults have lagged behind technological developments.
The Government’s stated intention to criminalise the creation of non-consensual sexual deepfakes is a welcome development. However, the effectiveness of this response will depend on the timely commencement of legislation, meaningful regulatory enforcement, and the willingness of platforms to implement robust safeguards.
As AI technology continues to evolve, the challenge for lawmakers and regulators will be to ensure that legal protections remain responsive, enforceable and centred on the rights and dignity of those affected.
This article was written by Karlia Lykourgou and Harriet Johnson, barristers at Doughty Street Chambers.
[i] https://aiforensics.org/uploads/GrokUnleashed.pdf
[ii] https://refuge.org.uk/wp-content/uploads/2021/10/Unsocial-Spaces-for-web.pdf
[iii] https://www.theguardian.com/technology/2026/jan/18/grok-x-ai-tool-still-accessible-malaysia-despite-ban-vpns
[iv] Section 1 Protection of Children Act 1978.
[v] https://www.gov.uk/government/news/government-crackdown-on-explicit-deepfakes
