McGlynn warned that some of the deepfakes gaining attention might not actually fall under the new laws, saying definitions of “intimate” images covered underwear and nudity but not bikinis.
Ofcom, the UK’s communications regulator, said it had “made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK”. In a statement posted to X, the regulator added that the firms’ responses would shape any further investigation.
In December, the government also unveiled its violence against women and girls strategy, containing a pledge to ban “nudify” tools – something the government says will “target the problem at its source”.
“Legislation to criminalise the sharing of non-consensual deepfake images has progressed through parliament, but we are still waiting for the law to come into effect. Meanwhile, the sharing of real intimate images without consent is already illegal, but in practice this law is not being effectively enforced, with woefully low conviction rates,” said Emma Pickering, head of technology-facilitated abuse and economic empowerment at Refuge.
“Generative AI has made it easier than ever for perpetrators to create fake images at the expense of women’s safety, and at Refuge, we see firsthand the long-term impact that all forms of intimate image abuse can have on a survivor’s mental health and wellbeing.”
Kerry Smith, CEO of the Internet Watch Foundation, said AI products should be tested rigorously before going to market to make sure they cannot be exploited.
“We urge the UK government to introduce a regulatory framework for AI that requires companies to embed safety by design principles into the development of products to prevent the creation of this harmful content,” said Smith.
X is a “key platform” for distributing deepfake abuse images, according to the Institute for Strategic Dialogue. Its research found that tools used to generate sexually explicit, abusive images had been mentioned nearly 300,000 times on the platform.
Throughout 2025 there were rolling warnings that Grok could be used to create non-consensual sexually explicit images. In May, the AI apologised after fulfilling requests to “remove her clothes”. In August, the tool’s “spicy” mode generated “fully uncensored topless videos” of Taylor Swift, The Verge reported.
Other AI tools such as ChatGPT and Gemini reject requests to undress people in pictures.
But Penny East, chief executive of women’s rights organisation the Fawcett Society, said inaction from tech platforms made the problem harder to solve.
“This case shows how hard enforcement is when platforms fail to act. One of the most disturbing aspects of this episode is that the technology was also used on children and young girls,” said East.
“Safety for women and girls must be built into new technologies from the outset. And when companies fail to do that, the platforms must be held accountable. It is absolutely possible to put in the safeguards to prevent this happening, but it quite simply isn’t a priority for tech companies who are more interested in profit than protection.”
A government spokesperson said: “Sexually explicit deepfakes created without consent are degrading and harmful. We refuse to tolerate the violence against women and girls that stains our society which is why we have introduced legislation to ban their creation as quickly as possible.”
Big Issue has approached xAI, the company behind both X and Grok, for comment.