
Elon Musk’s Grok AI chatbot has triggered global outrage after reports alleged it generated sexualized images of women and minors, prompting urgent calls from governments and advocacy groups for tighter regulation and accountability.
What Are Grok AI Deepfakes?
Elon Musk’s free AI chatbot, Grok, has sparked controversy after being accused of generating sexually explicit and degrading content involving women. The tool has drawn sharp criticism from regulators and governments in Europe, India and Malaysia.
In India, authorities have ordered social media platform X to immediately remove all vulgar, obscene and unlawful material, particularly content produced by Grok, warning that failure to comply could invite legal action.
The AI chatbot received an upgrade last month with the rollout of “Grok Imagine,” a tool that generates images and videos from text prompts and features a “spicy mode” that enables adult content.
Critics say the update has triggered a sharp rise in user-generated sexualized deepfakes, intensifying concerns over misuse and platform safeguards.
Trigger for Government Action
The Ministry of Electronics and Information Technology (MeitY) on Tuesday issued a strong notice to X Corp for failing to meet statutory due-diligence requirements under the Information Technology Act, 2000, and the IT Rules, 2021.
The ministry raised serious concerns over reports that X’s AI tool, Grok, has been misused to generate and circulate obscene, indecent and sexually explicit content targeting women. MeitY directed the platform to conduct a thorough review of Grok’s technical safeguards and governance mechanisms to prevent the creation of unlawful content and has sought a detailed action-taken report from X.
Legal and Regulatory Implications
The controversy is accelerating a regulatory shift, with governments moving beyond broad guidelines toward clear, enforceable laws aimed at curbing AI-related harms.
Regulators are increasingly calling for mandatory safeguards to be built into AI models at the training stage, rather than relying on post-publication moderation. Several countries are also updating existing laws or drafting new legislation to specifically criminalise the creation and distribution of non-consensual deepfakes. Meanwhile, the global backlash spanning India, the EU, the UK and Malaysia has underscored the need for coordinated international standards to regulate AI across borders.
Impact on Users and Public Trust
The alleged misuse of Grok, the AI chatbot embedded within social media platform X, to create non-consensual deepfake images has triggered widespread outrage, eroded user trust and raised serious concerns about online safety and privacy.
Users, particularly women and minors, have reported being targeted with sexually explicit or manipulated images generated through Grok’s image tool, “Grok Imagine”, and its so-called “spicy mode”. Victims say the content has caused significant psychological distress and a sense of personal violation. The ease with which publicly shared photos can be altered without consent has intensified fears about weak safeguards, prompting users to question X’s enforcement of its stated zero-tolerance policy on non-consensual sexual content.
Privacy concerns have also deepened following reports that private interactions with Grok were indexed by search engines, undermining expectations of confidentiality. While some affected users have responded by deleting their accounts, the broader impact on platform activity remains mixed, with reports indicating a rise in Grok app downloads even as criticism mounts.
What Happens Next
The controversy underscores the growing risks posed by powerful generative AI tools when safeguards fall short. As governments push for stricter oversight, platforms like X face increasing pressure to balance innovation with responsibility. The outcome could shape how AI is regulated across social media worldwide.
For more such informative articles, stay tuned to The World Times.