
In short
- Australia’s eSafety Commissioner has flagged a spike in complaints about Elon Musk’s Grok chatbot creating non-consensual sexual images, with reports doubling since late 2025.
- Some complaints concern material that may constitute child sexual exploitation, while others involve adults who have been targeted with image-based abuse.
- The concerns come as governments around the world investigate Grok’s lax content moderation, with the EU declaring the chatbot’s ‘Spicy Mode’ illegal.
Australia’s independent online safety regulator warned Thursday about the growing use of Grok to generate sexualized images without consent, revealing that complaints to its office about the AI chatbot have doubled in recent months.
The country’s eSafety Commissioner, Julie Inman Grant, said some reports concern possible child sexual exploitation material, while others concern adults being exposed to image-based abuse.
“I am deeply concerned about the increasing use of generative AI to sexualize or exploit people, especially when children are involved,” Grant said in a post on LinkedIn on Thursday.
The comments come amid mounting international backlash against Grok, a chatbot built by billionaire Elon Musk’s AI startup xAI that can be asked directly on X to alter users’ photos.
Grant warned that AI’s ability to generate “hyper-realistic content” makes it easier for bad actors to create synthetic abuse and harder for regulators, law enforcement and child safety groups to respond.
Unlike rivals such as ChatGPT, Musk’s xAI has positioned Grok as an “edgy” alternative that generates content other AI models refuse to produce. Last August, it launched “Spicy Mode,” designed specifically to create explicit content.
Grant noted that Australia’s enforceable industry codes require online services to implement safeguards against child sexual exploitation material, whether AI-generated or not.
Last year, eSafety took enforcement action against widely used ‘nudify’ services, forcing their withdrawal from Australia, she added.
“We have now entered an era where companies must ensure that generative AI products have appropriate safeguards and guardrails built in at every stage of the product lifecycle,” Grant said, noting that eSafety “will investigate and take appropriate action” using the full range of regulatory tools.
Deepfakes on the rise
In September, eSafety secured Australia’s first deepfake penalty when the Federal Court fined Gold Coast man Anthony Rotondo $212,000 (A$343,500) for posting deepfake pornography of prominent Australian women.
The eSafety Commissioner took Rotondo to court in 2023 after he defied removal orders, saying they “meant nothing to him” because he was not a resident of Australia. He then emailed the images to 50 addresses, including Grant’s office and media outlets, according to an ABC News report.
Australian lawmakers are pushing for stronger protections against non-consensual deepfakes that go beyond existing laws.
Independent Senator David Pocock introduced the Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 in November, which would see those who share non-consensual deepfakes subject to an upfront fine of $102,000 (A$165,000), with companies facing fines of up to $510,000 (A$825,000) for failing to comply with takedown notices.
“We now live in a world where, increasingly, anyone can create a deepfake and use it however they want,” Pocock said in a statement criticizing the government for being “asleep at the wheel” on AI protections.