Several minors in the Spanish region of Tierra de Barros could face prison time after they allegedly used artificial intelligence (AI) to create pornographic images of underage girls and released the fake photographs on pornographic websites, including the subscription site OnlyFans.
The Spanish National Police say they have already identified several minors involved in the creation and spread of the images, which first appeared on a WhatsApp group that contained students from various schools in the town of Almendralejo, El Mundo reports.
The case came to the attention of police after teacher and gynaecologist Miriam Al Adib made a video to her thousands of social media followers claiming to have discovered a picture of her 14-year-old daughter’s face superimposed onto a naked body.
“The montages are super-realistic, it is very worrying and a real atrocity,” she said, adding: “One of my daughters, with tremendous disgust, said to me, ‘Look what they’ve done to me.'”
Al Adib also called on other girls who may be victims to come forward and report their stories to the police, saying:
You are not to blame for what has happened. We know that there are girls who do not dare to tell their parents, because they might be told that this happened to them because they have social media and uploaded things to the internet. But they will help you. You do not have to be silent or ashamed; you are the victims.
Another mother involved in the case, Fátima Gómez, claimed that her daughter had not only been victimised but attempts were made to blackmail her with the fake AI-generated images.
“She has shown me a conversation with one of the boys where he tells her to give him some money. When my daughter says no, the boy directly sends her the photo montage,” she said.
At least 30 girls are said to have been victims of the alleged perpetrators, and so far police have received 11 complaints.
Almendralejo Mayor José María Ramírez commented on the case, saying:
It may have started as a joke or hooliganism, but the significance is much greater and can have serious consequences for those who have prepared these photographs, as well as for those who disseminate them.
Parent Pedro García echoed the mayor’s comments, saying, “It’s gotten out of hand. It probably started as a joke among the group of friends to seem funny, but it has overflowed.”
“They would not know what they were doing, but it is still a crime even if it comes from ignorance; you have to make them understand that it is not nonsense, and I hope they have become aware,” Miriam Al Adib said.
“The vast majority have done it without bad intention, but that does not mean that the pain of the minors has diminished one iota,” she added.
Many countries have been dealing with cases of AI-generated child abuse in recent months as AI programmes have become more accessible to the general public.
In June, the BBC noted that AI-generated child sex abuse material was being sold on mainstream websites, including Patreon, and that makers of the material were using software called Stable Diffusion to create the images.
The UK’s intelligence and cyber-security agency GCHQ told the BBC:
Child sexual abuse offenders adopt all technologies and some believe the future of child sexual abuse material lies in AI-generated content.
Some countries are attempting to tackle the spread of AI-generated abuse images directly, such as Australia, which has drafted a new code requiring all search engines to ensure that such material does not appear in user searches.
Dan Sexton, chief technology officer at the Internet Watch Foundation (IWF), spoke to the Guardian newspaper earlier this month and claimed paedophiles were actively discussing how to create AI-generated abuse material on the dark web:
There’s a technical community within the offender space, particularly dark web forums, where they are discussing this technology. They are sharing imagery, they’re sharing [AI] models. They’re sharing guides and tips.
The IWF searches for abuse material in order to remove it from the internet, but Sexton warned that the group and others could become overwhelmed by AI-generated imagery. Law enforcement experts, meanwhile, say that the fake but realistic images could also make it harder to identify real victims of abuse.