Deepfakes or Deception? The Law on AI-Generated Images
The UK Government defines AI as systems that are adaptable and autonomous. In the justice system, AI is used as an umbrella term for machine-learning models that process data and make inferences, recommendations or decisions usually made by humans, and that can influence physical or virtual environments. The most widely recognised examples are tools such as OpenAI's ChatGPT.
What are deepfakes and why do they matter?
Deepfakes, on the other hand, are audio-visual content that has been manipulated, or AI-generated content that misrepresents someone or something. In deepfakes, speech, movement and facial features are often modified so seamlessly that it is difficult to distinguish what is real from what has been generated artificially.
Consider the numerous images that have gone viral online over the last few years: a series of TikTok videos that appeared to show actor Tom Cruise performing magic tricks and playing golf; a fake Ukrainian president Volodymyr Zelenskyy surrendering to Russia; and the current trend of merging a present-day photograph with an image of your younger self so that the two versions of you appear to hug. These are all fake. The technology has become so advanced, and so accessible, that CEOs are being tricked out of millions. Such content can cause harm in many ways: reputational damage, emotional distress, fraud, political manipulation, non-consensual intimate images and more. The Government is responding with new legal tools to address these risks.
Key legal changes: Online Safety Act 2023
The Online Safety Act 2023 introduced various new communication offences such as:
- S.179 False communications – sending a message you know to be false, with intent to cause non-trivial psychological or physical harm to the likely audience, and with no reasonable excuse for sending it
- S.187 Sending etc. photograph or film of genitals, also referred to as “cyberflashing” – sending photographs or films of genitals without consent. This new offence is inserted as s.66A of the Sexual Offences Act 2003
- S.188 Sharing or threatening to share intimate photograph or film, also known as “revenge porn” – three new offences of sharing an intimate photograph or film, and one new offence of threatening to share an intimate photograph or film. These offences are created by inserting three new sections (66B, 66C and 66D) into the Sexual Offences Act 2003 and deal with sharing intimate images without consent.
The Online Safety Act has also amended the Sexual Offences Act to cover intimate images or films which appear to show a person in an intimate state, so synthetic images, including deepfakes, now fall within the scope of these offences. In 2025, the Ministry of Justice has addressed sexually explicit deepfakes with vigour. The Criminal Justice Bill was amended to criminalise the creation of deepfakes. This is a change from the Government’s 2024 position, when the law focused on sharing, or threatening to share, deepfakes or intimate images without consent, but creating such images was not an offence, on the basis that there would be insufficient evidence of harm to a victim to justify criminalising the making of intimate images that are neither shared nor threatened to be shared. Under the new offences, individuals who create or design intimate images of another person using computer technology or graphics (deepfakes), or who request their creation, may face prosecution.
If the created image is then shared, the individual may also be charged under the sharing offence, which carries a maximum sentence of up to two years’ imprisonment. Even where an individual creates an intimate deepfake with no intention of sharing it, they commit a criminal offence if its purpose is to cause alarm, distress or humiliation. The defence that arises in such circumstances is consent: if the person depicted in the deepfake consented to their image being used, the offence does not apply.
It is also worth noting the revision of the laws on voyeurism. The Government now proposes that this law go beyond upskirting offences to include two more serious offences: taking intimate images, and installing equipment to record (and/or recording) intimate images.
The Government wished to incorporate these offences into the Criminal Justice Bill; however, given public concern about the increasing harm caused by such images, they were expedited into legislation and can be found at s.135 of the Data (Use and Access) Bill. This law remains in the pipeline, but given the vigour with which the Government seeks to protect online users and address the wide-ranging implications of deepfakes, it is likely to be passed. Attempts to limit the consequences of the proposed offences, such as removing the possibility of a custodial sentence or introducing a defence of reasonable justification, have been rejected.
It is not just individuals who can fall foul of these new offences. Platforms, service providers and companies also have obligations under the Online Safety Act 2023 to remove prohibited content or risk enforcement action. Platforms may still struggle with the detection and takedown of deepfakes, especially when they are distributed globally. However, intimate and nude images tend to be removed more quickly from platforms than from person-to-person sharing on devices. This is in part due to copyright laws, and the police will employ a multitude of resources to detect an individual sharing such images.
There are still gaps in the legislation surrounding AI. Proving that a particular image is a deepfake, showing an individual’s intention, proving the harm caused and tracing the author all remain complex issues. This can be seen in s.179 of the Online Safety Act (see above), which requires both that the individual knew the image was false and that they intended to cause harm.
The very fact that the law is adapting sends a message: misuse of AI in this domain is not a technological novelty to be ignored. For individuals, businesses and content creators it underlines the need for caution: generating or using AI images of others may carry legal risk, even if you think “it’s just a joke” or believe you are legitimately using AI for satire, art or seduction. The courts will need to balance the protection of individuals with freedom of expression under Article 10 of the European Convention on Human Rights.
As an individual in particular, you may not realise that an image is AI-generated and may share it without malicious intent. We often see cases where a person forwards or reposts a manipulated image believing it to be real. The context of any alleged conduct will be vital.
As with anything, there are growing pains, and the difficulty remains that rapid technological change is ongoing. As AI tools become more accessible, the boundary between editing and deepfakery becomes blurrier, and the law takes time to catch up. In theory, the use of broad definitions such as “digital technology” and “computer graphics” will give the Government time to amend existing laws and implement new ones.
The law on AI-generated images and deception is evolving. As with any emerging offence, the early cases will test how far the law should go in punishing conduct that may be careless, reckless or simply misunderstood. AI now makes it easier to manipulate reality, but there remain concerns about the expansion of criminal liability into the realm of creation, and consequently about evidential fairness.
How can Hodge Jones & Allen criminal defence solicitors help?
Being arrested or interviewed in connection with a deepfake image offence can be overwhelming, particularly if you have never been arrested before. The law surrounding AI-generated images changed in 2025, and many people do not realise what counts as a criminal offence.
If you are contacted by the police or believe you may be under investigation for creating, sharing, or being linked to AI-generated or deepfake content, seek legal advice immediately. Our Criminal Defence Team have extensive experience in emerging digital offences and can provide clear, confidential guidance. Get in touch with us today to discuss your situation with one of our solicitors. Call 0330 822 3451 or request a callback.