The Internet Watch Foundation (IWF) says its analysts have found "criminal imagery" of girls aged between 11 and 13 which "appears to have been created" using Grok.
The AI tool is owned by Elon Musk's company xAI. It can be accessed either via its website and app, or through the social media platform X.
The IWF said it found "sexualised and topless imagery of girls" on a "dark web forum" in which users claimed they had used Grok to create the imagery.
The BBC has approached X and xAI for comment.
The IWF's Ngaire Alexander told the BBC tools like Grok now risked "bringing sexual AI imagery of children into the mainstream".
He said the material would be classified as Category C under UK law – the lowest severity of criminal material.
But he said the user who uploaded it had then used a different AI tool, not made by xAI, to create a Category A image – the most serious category.
"We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material (CSAM)," he said.
The charity, which aims to remove child sexual abuse material from the web, operates a hotline where suspected CSAM can be reported, and employs analysts who assess the legality and severity of that material.
Its analysts found the material on the dark web – the images were not found on the social media platform X.
X and xAI were previously contacted by Ofcom, following reports Grok could be used to make "sexualised images of children" and undress women.
The BBC has seen a number of examples on the social media platform X of people asking the chatbot to alter real photos to make women appear in bikinis without their consent, as well as putting them in sexual situations.
The IWF said it had received reports of such images on X, however these had not so far been assessed to have met the legal definition of CSAM.
In a previous statement, X said: "We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.
"Anyone using or prompting Grok to make illegal content will face the same consequences as if they had uploaded illegal content."
