A Tennessee mother says her daughter's senior year of high school turned into a nightmare. Through an anonymous Instagram message her daughter received, she learned that someone had taken real photographs from her daughter's school yearbook and homecoming dance and used Elon Musk's AI tool Grok to strip away her clothing and generate sexually explicit images and videos of her. When the family contacted the police, officers named the company responsible: xAI. Most parents had never heard of it.
On March 16, three teenagers, identified in court documents only as Jane Does 1, 2, and 3, filed a class action lawsuit in federal court in California against xAI, the artificial intelligence company that makes Grok. The lawsuit alleges that xAI knowingly participated in the production and distribution of child sexual abuse material and failed to protect children from being targeted. This is not an isolated incident. It is the most recent and most legally significant event in a crisis that has been building since late 2025, when Grok generated millions of sexually explicit images in less than two weeks, including more than 20,000 depicting children. The case has become the most important test yet of whether AI companies can be held legally liable when their tools are used to harm children.
What Happened: The Full Story
The Arrest That Started It All
The chain of events leading to the lawsuit began in December 2025, when police in eastern Tennessee arrested a man who had allegedly compiled images and videos of at least 18 minor girls. According to the lawsuit, the individual took real photographs of girls from social media posts, school yearbooks, and homecoming dance photos, and used Grok's image generation tools to remove their clothing and create sexually explicit content. In one documented instance cited in the lawsuit, the man used Grok to strip a blue bikini from a photograph a girl had posted to her Instagram account, producing an image depicting her without any clothing. The images were not kept private: they were traded on the messaging platform Telegram and on the file-sharing platform Mega, where an investigation discovered hundreds of AI-generated and altered sexual abuse images of minors circulating among predators. Multiple families learned from police that child sexual abuse material had been created of their daughters only after the man's arrest.
The Lawsuit Filed March 16
The class action lawsuit, filed on March 16 in federal court in California, was brought by three teenagers through the law firms Lieff Cabraser Heimann & Bernstein and Baehr-Jones Law. The suit names xAI as the defendant and accuses the company of three things: deliberately licensing its technology to app makers, often outside the United States, while knowing the technology could be used to generate child sexual abuse material; failing to incorporate industry-standard guardrails that other AI companies use to prevent the generation of sexual content; and failing to report individuals who create or distribute child sexual abuse material to the appropriate authorities. The lawsuit argues that xAI "knowingly participated" in the production and distribution of child sexual abuse material rather than acting as a passive platform unaware of the harm being done. Attorneys for the plaintiffs stated that the perpetrator in the Tennessee case relied not on Grok directly but on an unnamed third-party app that used xAI's algorithm, an arrangement they say allowed xAI to outsource liability while still profiting from the technology. The legal theory at the heart of the case is that xAI bears direct responsibility for harms enabled by technology it built and licensed, regardless of who operated the tool at the moment the images were generated.
What Is Grok and What Is “Spicy Mode”?
Grok is an AI chatbot developed by xAI, Elon Musk's artificial intelligence company, and hosted on his social media platform X. It was launched in 2023. In late 2025, xAI released a feature called Grok Imagine, also internally referred to as "spicy mode", that allowed users to generate images with fewer content restrictions than competing AI tools. The feature let users prompt Grok to alter images of real people, including stripping clothing to create nude or near-nude depictions. The lawsuit notes that Musk himself had publicly endorsed this approach, posting on X in August 2025 that "VHS won in the end, in part because they allowed spicy mode", a statement the plaintiffs now use against him. In practice, spicy mode transformed Grok from a text and image generation tool into what critics and courts are now characterising as an automated tool for generating nonconsensual intimate images at scale.
The Scale: One Nonconsensual Image Per Minute
The scale of the harm caused by Grok's image tools has been documented by multiple independent organisations. A December 2025 review by the content analysis firm Copyleaks found that Grok was generating approximately one nonconsensual sexualised image per minute, each of which was posted directly to X for public consumption. The Center for Countering Digital Hate conducted a sampling of the images and found that in less than two weeks after the spicy mode launch, Grok had created millions of sexualised images, including more than 20,000 depicting children. The images circulated by the chatbot included sexually explicit deepfakes of targets ranging from high-profile figures such as Taylor Swift to ordinary social media users, as well as minors whose photographs had been scraped from public accounts. The volume and accessibility of this material far exceeded anything seen from competing AI platforms.
How Grok Differs From Other AI Image Tools
| Feature | Grok (xAI) | Google Imagen | OpenAI DALL-E | Significance |
| --- | --- | --- | --- | --- |
| Digital watermarks on generated images | No — not adopted as of filing date | Yes — discloses AI origin | Yes — discloses AI origin | Without watermarks, images are harder to identify as AI-generated (see the metadata-check sketch after this table) |
| Industry-standard content guardrails | No — lawsuit alleges they were deliberately omitted | Yes — restricts sexual content generation | Yes — restricts sexual content generation | Guardrails are the primary safety mechanism against misuse |
| Reporting of CSAM creators to authorities | No — lawsuit alleges failure to report | Yes — platforms must report by law | Yes — platforms must report by law | Reporting is a federal legal obligation |
| "Spicy mode" / explicit image generation | Yes — launched late 2025 | No equivalent feature | No equivalent feature | Explicitly designed to generate more sexual content |
| Consent verification for images of real people | No | Partial restrictions | Partial restrictions | Critical gap enabling nonconsensual image generation |
| Response to CSAM discovery | Musk denied, then restricted bikini images only | Immediate removal protocols | Immediate removal protocols | Lawsuit characterises xAI's response as incomplete and delayed |
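The watermark row above refers to content-provenance metadata such as C2PA "Content Credentials", which OpenAI attaches to DALL-E images (Google's SynthID watermark lives in the pixels themselves and cannot be found this way). As a rough illustration of why that metadata matters, the Python sketch below scans an image file for byte signatures that commonly appear in C2PA manifests. The file name and the signature list are illustrative assumptions, not a forensic standard, and a negative result proves nothing: metadata can be stripped, and pixel-level watermarks are invisible to a byte scan.

```python
def has_provenance_marker(path: str) -> bool:
    """Heuristic screen for embedded content-provenance (C2PA) metadata.

    C2PA manifests are carried inside the image file and contain
    recognisable ASCII labels such as "c2pa". This detects only
    metadata-based disclosure; it cannot detect pixel-level watermarks,
    and a missing marker never proves an image is authentic.
    """
    signatures = (b"c2pa", b"jumb", b"contentauth")  # illustrative markers, not exhaustive
    with open(path, "rb") as f:
        data = f.read().lower()
    return any(sig in data for sig in signatures)


if __name__ == "__main__":
    # Hypothetical file name for illustration.
    print(has_provenance_marker("downloaded_image.jpg"))
```

The point of the sketch is the asymmetry it exposes: when a generator attaches provenance metadata, even a crude check can flag an image for review; when a generator attaches nothing, as the lawsuit alleges of Grok, victims and investigators have no such trail to follow.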
Musk’s Response: Denial, Then Partial Restriction
Elon Musk's public response to the crisis has followed a pattern of initial denial and then limited, partial action. When Grok's sexualised image generation first attracted major attention, Musk posted on X in January 2026: "Not aware of any naked underage images generated by Grok. Literally zero." The lawsuit directly contradicts this claim, citing the Copyleaks analysis, the Center for Countering Digital Hate sampling, and the December 2025 arrest in Tennessee as documented evidence that such images were generated in large numbers. Later in January 2026, following the public outcry, Musk announced that Grok would no longer generate images of girls in bikinis, a partial restriction that critics noted addressed the symptom rather than the underlying capability. The lawsuit argues that Musk and xAI "saw a business opportunity" in Grok's image generation capabilities and "publicly released it anyway" despite knowing the type of harmful content that could be produced.
The Legal Framework: What Charges Are Being Brought
| Legal Claim | What It Alleges | Significance |
| --- | --- | --- |
| Production of CSAM with intent to distribute | xAI's tools produced and enabled distribution of child sexual abuse material | Federal crime — carries severe penalties if proven against a corporation |
| Possession of CSAM | xAI possessed child sexual abuse material generated by its own tools | Second federal charge — establishes direct corporate liability |
| Deliberate licensing to harmful third parties | xAI licensed image tools to app makers outside the US knowing misuse was likely | "Outsourcing liability" argument — undercuts the third-party defence |
| Failure to implement safety guardrails | xAI did not adopt industry-standard content restrictions used by Google and OpenAI | Negligence claim — asserts a duty of care to users and third parties |
| Failure to report CSAM creators | xAI did not report individuals who generated CSAM to law enforcement | Violation of federal reporting obligations |
| Emotional distress damages | Victims suffered psychological harm and loss of dignity, privacy, and personal safety | Civil damages claim — alleges ongoing and permanent harm |
Attorneys Vanessa Baehr-Jones of Baehr-Jones Law and Annika Martin of Lieff Cabraser stated the objective plainly. Baehr-Jones told the Washington Post: "These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool and then traded among predators." Martin added: "xAI and its founder Elon Musk deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed by it." The plaintiffs are seeking unspecified damages for emotional distress and harm, as well as an immediate court injunction barring Grok from generating sexualised content involving minors. The legal action seeks both financial accountability and structural change in how Grok operates.
The Broader Crisis: California AG and Previous Investigations
The March 16 lawsuit did not emerge in isolation; it is the latest development in a crisis that began attracting official attention in January 2026. California Attorney General Rob Bonta announced on January 14 that the state's Department of Justice would open an investigation into xAI. Bonta stated: "The avalanche of reports detailing the nonconsensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking. This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet. I urge xAI to take immediate action to ensure this goes no further. We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material." This is also the first lawsuit in which underage victims of xAI's image tools have taken direct legal action; previous actions involved adult women, including influencer Ashley St. Clair, who has a child with Musk and who sued the company over AI-produced images depicting her nude as a teenager. The legal pressure on xAI is now escalating from every direction at once: state investigation, federal lawsuit, prior civil suits, and international scrutiny.
The Deepfake Crisis: How Big Is the Problem?
The xAI lawsuit sits within a broader and rapidly expanding crisis around AI-generated nonconsensual intimate images — commonly called deepfakes. The scale of the problem is not marginal.
- 90% of all deepfakes are nonconsensual porn: Context News reported in 2024 that the vast majority of deepfake content — approximately 90% — is nonconsensually generated pornography of women and girls. This figure has only grown as AI image generation tools have become more accessible.
- Production doubling every six months: Tech experts cited in the case estimate that deepfake production is doubling every six months, driven primarily by the widespread availability of AI image generation tools. At that rate, two years of growth means four doublings, a roughly sixteen-fold increase, so the problem will be dramatically worse in two years than it is today.
- Children are primary victims: Despite most media coverage focusing on electoral deepfakes or high-profile adult victims like Taylor Swift, children are among the primary victims of nonconsensual AI image generation — a fact the xAI lawsuit places at the centre of its argument.
- Little recourse exists: NPR has reported that there is currently very little recourse available for people whose images and likenesses are stolen to create nonconsensual AI images, a gap the plaintiffs' attorney described as the core reason for the lawsuit.
What Parents Need to Do Right Now
| Action | Why It Matters | How to Do It |
| --- | --- | --- |
| Audit your child's public social media | Photos scraped from public accounts are primary source material for deepfake generation | Review Instagram, TikTok, Snapchat, X privacy settings — set to private immediately |
| Talk to your child about this case | Awareness is the first line of defence — children need to know this threat exists | Use age-appropriate language — focus on what to do if they discover an image, not on fear |
| Set up Google Alerts for your child's full name | Provides early warning if images appear in indexed public locations | Go to alerts.google.com — add name + school name combinations |
| Teach children never to share location-tagged photos | Location data in images helps predators identify and target individuals | Check photo settings on all devices — disable GPS tagging in camera apps (see the metadata-stripping sketch after this table) |
| Know what to do if an image is found | Speed of response matters — images spread quickly once online | Contact the platform immediately — file an NCMEC report at CyberTipline.org — contact police |
| Verify school yearbook and event photo policies | Schools routinely post photos publicly — many are unaware of the risk | Ask school administration about photo publication policies and opt-out options |
| Monitor for anonymous messages on all platforms | Jane Doe 1 found out through an anonymous Instagram message | Ensure you have visibility into messages your child receives from unknown accounts |
| Understand that private accounts are not fully safe | Photos shared with "friends" can still be captured and misused | Discuss with children who their followers actually are — conduct regular follower audits |
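To make the GPS-tagging row concrete, here is a minimal Python sketch, using the widely available Pillow library, that re-saves a photo with pixel data only, dropping embedded EXIF metadata, including GPS coordinates, before the image is shared. The file names are placeholders, and many phones and platforms already strip this data on upload, so treat it as an illustration of the principle rather than a complete safeguard.

```python
from PIL import Image  # Pillow: pip install pillow


def save_without_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with pixel data only, dropping EXIF/GPS metadata.

    Copying pixels into a fresh Image object leaves behind the EXIF block
    (which can contain GPS coordinates, device details, and timestamps)
    that would otherwise be carried over when the original is re-saved.
    """
    with Image.open(src_path) as original:
        clean = Image.new(original.mode, original.size)
        clean.putdata(list(original.getdata()))
        clean.save(dst_path)


if __name__ == "__main__":
    # Placeholder file names for illustration.
    save_without_metadata("homecoming_photo.jpg", "homecoming_photo_clean.jpg")
```

Parents who are not comfortable with code can get the same effect by disabling location access for the camera app, which both iOS and Android support in their privacy settings.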
What the Law Says — and Where It Falls Short
United States federal law requires online platforms to report child sexual abuse material to the National Center for Missing and Exploited Children, and the PROTECT Act criminalises the production, possession, and distribution of child sexual abuse material, including AI-generated depictions of real minors. The lawsuit alleges xAI violated both the reporting obligation and the production prohibition. The legal framework has significant gaps, however: Section 230 of the Communications Decency Act has historically shielded platforms from liability for user-generated content, though courts are increasingly scrutinising whether this protection applies when a company's own tools generate the harmful content rather than users independently doing so. The xAI case may become a landmark test of whether Section 230 protects AI companies when their own models produce the harmful material.
Conclusion
The lawsuit filed on March 16 by three teenagers against xAI is more than a legal action against one company. It is a reckoning with a question the AI industry has been avoiding: who is responsible when a company builds a tool that it knows can be used to harm children, markets that capability as a feature, and then claims no liability when children are harmed? The complaint’s answer is direct — xAI and its founder “saw a business opportunity” and “publicly released it anyway.”
For parents, the lesson of this case is practical and urgent. Every public photograph your child has ever posted to Instagram, TikTok, a school website, or a local news story is available to anyone with access to an AI image generation tool. Those tools are increasingly accessible, increasingly capable, and, as this case demonstrates, not consistently governed by the safety standards that would prevent misuse. The actions parents take today to reduce their children's public digital footprint are not overprotection. They are the most practical response available to a threat that the law and the technology industry are still catching up to.
Frequently Asked Questions (FAQs)
Q1: What exactly is xAI being sued for in the Grok lawsuit?
Three teenagers filed a class action lawsuit on March 16 alleging that xAI knowingly produced and distributed child sexual abuse material through its Grok image generation tools. The lawsuit accuses xAI of deliberately omitting industry-standard safety guardrails, licensing its technology to third-party apps outside the US while knowing it could be misused, and failing to report individuals who used Grok to generate child sexual abuse material to law enforcement. The suit also alleges that xAI and Elon Musk "saw a business opportunity" and released Grok's image generation capabilities knowing children could be harmed. The legal claims span federal criminal law, civil damages, and structural injunctive relief.
Q2: Did Grok actually create the images directly, or did a third-party app do it?
According to the lawsuit, the perpetrator in the Tennessee case used an unnamed third-party app that ran on xAI's algorithm, not Grok directly. The lawsuit argues this distinction does not absolve xAI of responsibility: the complaint states that xAI "deliberately licensed its technology to app makers, often outside the US" in a way that allowed it to profit from the capability while attempting to distance itself from direct liability. The plaintiffs' attorney stated this represented an attempt to "outsource the liability of their incredibly dangerous tool." The core legal question is whether a company bears responsibility for harms enabled by technology it built and licensed, regardless of who operated it at the moment of harm.
Q3: How is Grok different from ChatGPT or Google’s AI image tools?
Grok does not apply digital watermarks identifying images as AI-generated, a standard adopted by both Google and OpenAI. The lawsuit also alleges that Grok did not incorporate the industry-standard content guardrails that Google and OpenAI use to restrict sexual content generation. On top of that, xAI actively launched "spicy mode" (Grok Imagine), a feature specifically designed to generate more sexual content with fewer restrictions, with no equivalent at Google or OpenAI. These design choices placed Grok in a category distinct from its competitors, a distinction that sits at the heart of the legal argument.
Q4: What should a parent do if their child discovers a fake nude image of themselves?
Act immediately. Report the image to the platform where it was found; most platforms have emergency removal processes for child sexual abuse material. File a report with the National Center for Missing and Exploited Children at CyberTipline.org, which triggers legal obligations for platforms to remove the material and notify law enforcement. Contact local police, and preserve all evidence (screenshots, URLs, usernames) before reporting, as platforms sometimes remove content before evidence can be documented. Seek immediate mental health support for your child; the psychological impact of this kind of violation is severe and well documented. The speed and comprehensiveness of the response significantly affects both the extent of harm and the availability of legal remedies.
Q5: Can AI-generated nude images of real teenagers be prosecuted as child pornography?
Yes, under US federal law. The PROTECT Act covers AI-generated depictions of real minors, not just photographs. The March 16 lawsuit explicitly charges xAI with production, possession, and distribution of child sexual abuse material, applying the same legal framework to AI-generated images that applies to conventional child pornography. The man arrested in Tennessee in December 2025, whose actions led directly to the lawsuit, was charged under these provisions. AI-generated nonconsensual nude images of real, identifiable minors are not a legal grey area in the United States: they are federal crimes, both for the person who generates them and potentially for the company whose tools made generation possible.


