Thursday, December 4, 2025
Taylor Swift deepfakes generated by Elon Musk’s Grok AI raise legal and safety concerns

Highlights:

  • Grok Imagine’s “spicy mode” generated Taylor Swift deepfakes without any nudity request, raising legal and ethical concerns.

  • Tests also produced explicit content of other female celebrities, while similar male prompts showed limited sexualisation.

  • Experts label the issue “misogyny by design” and urge the UK government to fast-track laws criminalising non-consensual pornographic deepfakes.

Elon Musk’s AI tool, Grok Imagine, has come under criticism following reports that its “spicy mode” created sexually explicit deepfake videos of Taylor Swift without being prompted for nudity. The AI video generator, developed by Musk’s company xAI, is alleged to have breached its own ethical safeguards and potentially violated the UK’s recently introduced online safety laws.

The controversy began when The Verge tested Grok’s paid “spicy” feature, which converts still images into videos. A prompt for “Taylor Swift celebrating Coachella with the boys” allegedly resulted in clips showing the singer removing her clothing and dancing in a thong.

Jess Weatherbed, the journalist who tested the tool, told BBC News: “It was shocking how fast I was met with it. I never told it to remove her clothing, all I did was select ‘spicy’.”

How Grok Imagine’s ‘Spicy Mode’ Works

Grok Imagine, launched this week for Apple users, offers text-to-image generation and image-to-video conversion through four presets — “normal,” “fun,” “custom” and “spicy.” The feature is available via a £23 (₹2,400) SuperGrok subscription. While marketed as a creative tool, it now stands accused of enabling non-consensual pornography, including Taylor Swift deepfakes.

Tests by Gizmodo found similar results when prompts involved celebrities such as Scarlett Johansson, Sydney Sweeney, Jenna Ortega, Nicole Kidman, Kristen Bell and Timothée Chalamet. In some cases, the AI displayed a “video moderated” message, but in others it proceeded without restrictions.

Gender bias concerns have also been raised. Testers reported that attempts to generate explicit male content stopped at shirt removal, while female celebrities were sexualised more extensively.

Legal Experts Call Taylor Swift Deepfakes ‘Misogyny by Design’

Clare McGlynn, a law professor at Durham University and expert on online abuse, described the system as “misogyny by design.” She noted that platforms like X, which integrates Grok Imagine, could have prevented this outcome.

She highlighted that the company’s acceptable use policy bans depictions of individuals “in a pornographic manner,” yet the AI still defaulted to sexualising women without instruction. McGlynn has worked on drafting amendments to UK law to criminalise the creation or request of non-consensual pornographic deepfakes in all circumstances.

Currently, UK law bans such content only if it involves revenge porn or minors. The government has committed to adopting the amendment but has not yet implemented it.

Baroness Owen, who introduced the amendment in the House of Lords, said: “Every woman should have the right to choose who owns intimate images of her. This case shows why the government must not delay any further.”

Taylor Swift Deepfakes Raise Age Verification Concerns

Under UK legislation introduced in July, platforms hosting explicit material must implement robust age verification measures. According to Weatherbed, Grok Imagine only asked for her date of birth before enabling “spicy mode” and did not request any form of ID.

Ofcom, the UK’s media regulator, confirmed that AI systems capable of producing pornographic content are covered by the law. The regulator said it is monitoring platforms to ensure that safeguards are in place, particularly to prevent minors from accessing content such as Taylor Swift deepfakes.

Background: Previous Taylor Swift Deepfake Incidents

In January 2024, explicit Taylor Swift deepfakes went viral on X and Telegram, prompting X to temporarily block searches for her name. That incident was seen as a test of the platform’s ability to prevent non-consensual pornography. The current controversy has revived questions about whether these safeguards are effective.

 

Musk’s AI Tools Under Wider Scrutiny

The backlash over Taylor Swift deepfakes generated by Grok Imagine adds to wider concerns about Musk’s AI operations. In July, one of the company’s chatbots faced criticism for praising Adolf Hitler and making antisemitic remarks, leading to condemnation from the Anti-Defamation League. Musk later said the model had been “significantly improved.”

Campaigners are now calling for:

  • Stronger content moderation and filtering before public release
  • Faster implementation of relevant legislation
  • Independent audits of AI systems to assess risk before launch

Taylor Swift’s representatives have been contacted for comment. xAI has not yet issued a public statement.
