If Grok was used to create sexualized AI “deepfake” images of you without your consent, our firm wants to hear your story.

We are investigating claims on behalf of individuals harmed by nonconsensual intimate images generated by Grok and spread across X and other platforms. In late December 2025 and early January 2026, Grok reportedly became a mass-production tool for nonconsensual sexually explicit images of real people. Users allegedly exploited Grok’s image-editing features to “digitally undress” individuals, alter their bodies, place them in degrading sexualized poses, and create explicit fake images—all without consent. These images then spread across social media, especially X, causing humiliation, emotional distress, reputational damage, and serious invasions of privacy.
The scale of the reported abuse is staggering. Researchers at the Center for Countering Digital Hate found that Grok produced an estimated 3 million sexualized images, including 23,000 of children, over an 11-day period. That averages out to roughly 190 sexualized, AI-generated images per minute. Victims reportedly included celebrities, public figures, and private individuals, many of whom had no idea that fake sexualized images of them were being created and circulated online. No one should be exploited, exposed, or humiliated this way.
If you were targeted, you should not have to bear the consequences alone. Our firm is fighting to hold companies accountable for enabling this abuse and to pursue compensation for victims harmed by nonconsensual AI-generated intimate images. We can also work to protect your privacy along the way, as claims of this nature often proceed under a pseudonym. If you or someone you know has been affected, contact us today for a free, confidential case evaluation by filling out our form or calling 415-236-2305.