Elon Musk claims he is “not aware” of his Grok AI being used to generate sexualised images of women and children—a statement issued just hours before California’s Attorney General formally opened an investigation into xAI’s chatbot. While Musk’s denial focuses on personal awareness, users and researchers have documented a systemic pattern of abuse that has already led to national bans in Indonesia and Malaysia.
The safety vacuum at X (formerly Twitter) is a key factor. Since Musk’s acquisition in October 2022, the platform has slashed its global Trust & Safety team by 30%, with safety engineering roles seeing cuts as high as 80%, according to the Australian eSafety Commission. In this environment, Grok has emerged as a high-speed engine for nonconsensual image manipulation.
The scale of abuse
The volume of AI-generated harm is reaching an industrial scale. Researchers at Copyleaks observed Grok being used to create sexualised imagery at a rate of roughly one image per minute. Other audits recorded peaks of 6,700 images per hour.
Despite these figures, users report that complaints filed on the platform are often met with a standard automated response: “This account does not violate community guidelines.”
Echoes in Nigeria
Similar patterns emerged in Nigeria in 2025, including a prompt to “undress” actress Kehinde Bankole (later deleted) and another user’s public apology for making comparable prompts. Authorities issued no formal response in either case.
Under Nigerian law, these acts can be prosecuted as criminal defamation or blackmail under sections 373–375 of the Criminal Code Act, and as cyberstalking or identity theft under the Cybercrimes Act of 2015. But neither law was designed for the specific nuances of generative AI.
The regulatory gap remains wide. The Online Harms Protection (OHP) Bill, first drafted by NITDA in July 2025, is still undergoing multi-stakeholder reviews and has yet to be enacted. Until then, Nigerian victims of AI-generated deepfakes face a “thin” enforcement landscape where digital evidence often outpaces judicial frameworks.
xAI’s response
Musk maintains that Grok only responds to user prompts and rejects illegal requests, often blaming any violations on prompt manipulation and bugs. To stem the tide, xAI recently limited image generation to Premium subscribers and disabled certain image-editing features. Yet, the leadership’s tone remains dismissive; Musk recently joked about the crisis by asking Grok to generate a satirical image of himself in a bikini.
As technology continues to move faster than the laws meant to hold it accountable, the distance between platform denials and user protection continues to grow.