It seems crazy to me, but I've seen this concept floated on several different posts. There seem to be a number of users here who think there is some way AI-generated CSAM will reduce the number of real-life child victims.
Like the comments on this post here.
https://sh.itjust.works/post/6220815
I find this argument crazy. I don't even know where to begin describing how many ways this will go wrong.
My views (which are apparently not based in fact) are that AI CSAM is not really that different from "actual" CSAM. It will still cause harm when viewed. And it is still based in the further victimization of the children involved.
Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators, by giving predators an outlet that doesn't involve real, living victims, completely ignores the reality of how AI content is created.
Some have compared pedophilia and child sexual assault to a drug addiction, which is dubious at best, and pretty offensive imo.
Using drugs has no inherent victim, and it is not predatory.
I could go on, but I'm not an expert or a social worker of any kind.
Can anyone link me articles talking about this?
I’m just gonna put this out here and hope not to end up on a list:
Let's do a thought experiment and be empathetic toward the human behind the predator. Ultimately, they are sick, and they feel needs that cannot be met without doing something abhorrent. This is a pretty fucked-up situation to be in. Which is no excuse to become a predator! But understanding why people act how they act is important to creating solutions.
Most theories about humans agree that sexual needs are pretty important for self-realization. For the pedophile, this presents two choices: become a monster or never reach self-realization. We have to accept that this dilemma is the root of the problem.
Before, there was only one middle-way option: video and image material that the consumer could rationalize as not being as bad. Note that this isn't my opinion; I agree with the popular opinion that this is still harming children and needs to be illegal.
Now, for the first time, there is a chance to cut through this dilemma by introducing a third option: generated content. This still uses existing CSAM as a basis, but so does every database used to find CSAM for prevention and policing. The actual pictures and videos aren't stored in the AI model and don't need to be stored after the model has been created. With that model, more or less infinite new content can be created, which imo harms the children significantly less directly. This is, imo, different from actual CSAM material because no one can tell who is and isn't in the base data.
Another benefit of this approach has to do with the reason why CSAM exists in the first place. AFAIK, most of this material comes from situations where the child is already being abused. At some point the abuser recognises that CSAM can get them monetary benefits and/or access to CSAM of other children. This is where I will draw a comparison to addiction, because it's kind of similar: people doing illegal stuff because they have needs they can't fulfill otherwise. If there were a place to get the "clean" stuff, far fewer people would go to the shady corner dealer.
In the end, I think there is a utilitarian argument to be made here. Even with the far-removed damage that generating CSAM via AI still deals to the actual victims, we could help people not become predators, help predators not reoffend, and most importantly prevent, or at least lessen, the amount of further real CSAM being created.
[This comment has been deleted by an automated system]
You make a very similar argument to @Surdon's, and my answer is the same (in short; my answer to the other comment is longer):
Yes, giving everyone access would be a bad idea. I parallel it to controlled-substance access, which reduces black-market drug sales.
You do have some interesting details though:
Training a model on real CSAM is bad, because it adds the likeness of the original victims to the image model. However, you don’t need CSAM in your training set to generate it.
This has been mentioned a few times, mostly with the idea of mixing "normal" photos of children with adult porn to generate CSAM. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.
As far as I can tell, we have no good research in favour of or against allowing automated CSAM. I expect it’ll come out in a couple of years. I also expect the research will show that the net result is a reduction in harm. I then expect politicians to ignore that conclusion and try to ban it regardless because of moral outrage.
This is more or less my expectation too, but I wouldn't count on the research coming out in a few years. There isn't much incentive to do actual research on the topic, AFAIK: there isn't much to be gained, given the probable reaction of the regulators, and much to lose with such a hot topic.
[This comment has been deleted by an automated system]
It's not even an idea; it's how you get CSAM out of existing models.
I didn't know this was a thing, tbh. I knew that you could get them to generate adult porn or combine faces with adult porn. I didn't know they could already create realistic CSAM. I assumed they used the original material to train one of the open models. Well, that's even more horrifying.
It’s possible the concept is never addressed, but I don’t think there’s any way to stop the spread of CSAM once you no longer need to exchange files through shady hosting services.
I didn't even think about that. Exchanging these models would be significantly less risky than exchanging the actual material. Images are scanned by cloud storage providers, and apparently archives with weak passwords are too. But no one is going to execute an AI model just to see whether it can or cannot produce CSAM.
[This comment has been deleted by an automated system]
Except there is a good bit of evidence showing that consuming porn actively changes how we behave around sex. By creating CSAM with AI, you create the depiction of a child that is a mere object for sexual gratification. That fosters a lack of empathy and an egocentric, self-gratifying viewpoint. I think that can be said of all porn, honestly. The more I learn about what porn does to our brains, the more problematic I find it.
AI CSAM will not create new pedophiles, but it may keep existing pedophiles from fueling a disgusting market of child exploiters.
deleted by creator
Very good comment all around; I just have a nitpick about this section:
Lastly, there's a very troubling thing I've noticed that the majority isn't willing to talk about: there are so, so many people out there who are attracted to kids. Not prepubescent kids, but very few 14-to-16-year-old girls will not have had men approach them with sexual comments. The United States of America voted against making child marriage illegal. The amount of "I'll just fuck this behaviour out of her" you can find online about Greta Thunberg, from even before she was an adult, is disturbing; people with full names and profile pictures on Facebook will sexualise and make rape threats against a child because she said something they didn't like. There's a certain amount of paedophilia that just gets overlooked and ignored.
Even worse, those people aren't included in research into paedophilia because of how "tolerated" it is. The ones who get caught and researched are the sickos who abuse tens or hundreds of children; the people who will marry a child won't be.
This is actually called hebephilia/ephebophilia, which the general public treats very similarly and often subsumes under the term pedophilia. It is considered its own thing, though. To quote Wikipedia:
Hebephilia is the strong, persistent sexual interest by adults in pubescent children who are in early adolescence, typically ages 11–14 and showing Tanner stages 2 to 3 of physical development.[1] It differs from pedophilia (the primary or exclusive sexual interest in prepubescent children), and from ephebophilia (the primary sexual interest in later adolescents, typically ages 15–18).[1][2][3] While individuals with a sexual preference for adults may have some sexual interest in pubescent-aged individuals,[2] researchers and clinical diagnoses have proposed that hebephilia is characterized by a sexual preference for pubescent rather than adult partners.[2][4]
My guess for why it is more tolerated than straight-up pedophilia is that these children have reached a more mature body that shows some or most properties of a sexually developed person. So while it's still gross and very likely detrimental to the child if pursued (it depends on the age in question; 16-18 is pretty close to adulthood), there seems to be more of an understanding for it.
[This comment has been deleted by an automated system]
The way I see it, and I'm pretty sure this will get downvoted, is that pedophiles will always find new material on the net. Just like with actual, normal porn, people will put it out.
With AI-generated content, at least there's no actual child being abused, and it can feed the need for ever-new material without causing harm to any real person.
I find the thought of kiddie porn abhorrent, and I think for every offender who actually assaults kids, there are probably a hundred who get off to porn. But that porn has to come from somewhere, and I'd rather it come from an AI.
What’s the alternative, hunt down and eradicate every last closeted pedo on the planet? Unrealistic at best.
Boy, this sure seems like something that wouldn't be that hard to just… do a study on. Publish a paper, perhaps? Get it peer reviewed?
It's always weird for me when people have super strong opinions on topics that could just be resolved by studying them and doing science.
"In my opinion, I think the square root of 7 oughta be 3."
Well, I mean, that's nice, but you do know there's a way we can find out what the square root of seven is, right? We can just go look and see what the actual answer is and make an informed decision around that. Then you don't need to have an "opinion" on the matter, because it's been put to rest, and now we can start talking about something more concrete and meaningful… like interpreting the results of our science and figuring out what they mean.
I’d much rather discuss the meaning of the outcomes of a study on, say,
AI Generated CSAM's impact on proclivity in child predators
, and hash out whether it really indicates an increase or decrease, perhaps flaws in the study, and what to do with the info. As opposed to just gesturing and hand-waving about whether it would or wouldn't have an impact. It's pointless to argue about what color the sky oughta be if we can just, you know, open the window and see what color the sky actually is…
I love your enthusiasm for research, but if only it were that easy. I'm a PhD researcher, and my field is sexual violence. It's really not that easy to just go out and interview child sex offenders about their experiences of perpetration.
that AI CSAM is not really that different from "actual" CSAM
OK, so I understand: instead of focusing on the important part, you know, the children being harmed and exploited, you instead focus on the morality of viewing pornography.
Very telling.
AI CSAM is not really that different from "actual" CSAM
How do you not see how fucking offensive this is? A drawing is not really different from a REAL-LIFE KID being abused?
It will still cause harm when viewed
The same way killing someone in a video game will cause harm?
And it is still based in the further victimization of the children involved.
The made up children? What the hell are you talking about?
Some have compared pedophilia and child sexual assault to a drug addiction
No one sane is saying that actually abusing kids is like a drug addiction. But you're conflating pedophilia and assault. When it's said that pedophilia is like a drug addiction, it's non-offending pedophiles who are being discussed. Literally no one thinks assaulting kids is like a drug addiction. That's your own misunderstanding.
Can anyone link me articles talking about this?
About what, exactly? There's zero evidence that drawings or fantasies cause people to assault children.
I don't get it. It seems many people want to condemn all forms of child porn, seemingly to avoid downvotes, because for some reason the internet community can't see that AI-generated images don't harm anyone.
Now that the hard right can't morally denounce gay people as abominations, they have moved on to trans people and pedos: people they're sure no one will defend.
Thankfully they’re wrong about the trans community, but zero people are going to come forward to try to defend pedos.
So they have a perfect target, and have been hammering the propaganda posts hard for about 3 years now.
Some innocents have been caught in the backlash, like the pediatrician whose car was set on fire by people too stupid to tell the difference between a pedo and a pediatrician.
Someone is going to die publicly because of this growing hatred, and everyone will just claim they deserved it.
I’m not sure what you mean by ‘defending’ pedophiles. They have a right to exist, and to feel validated in their attraction (which they do not control), but no right to have sex with children.
So what happens when a non-offending, celibate pedophile who has spent their life struggling against their urges gets outed and killed, having never touched a child?
Who will come forward and condemn the killers?
No one.
I would certainly condemn the killers. But you’re right, I feel a large segment of the online population wouldn’t.
Back in uni I had a roommate who was a celibate pedophile: a great kid, a brilliant programmer, and always kind, with a good sense of humor.
And a chronic alcoholic since the age of 14 as a coping mechanism.
None of us ever even knew until, back in 2006, he went to the school therapist to try to learn better coping mechanisms than getting blackout drunk every day at 7 pm sharp.
She deemed him a threat and contacted the FBI, because apparently patient confidentiality in the U.S. doesn't protect pedophiles. Since he had a niece he had never met (on purpose) on the other side of the continent, she felt validated in her actions.
They came and took him into custody. It wasn't an arrest, just a remand to mental healthcare for evaluation against his will.
The officers picking him up were pretty loud about the fact they were escorting a pedophile. Made some coarse jokes about it as they walked him out. Took his computer of course.
A week later he came back, broken AF. He called together me and two other people he considered friends and laid out the whole situation.
He had struggled with his desires his entire life and went to monumental lengths to eliminate even the chance of contact. He never touched a kid, never used CSAM or even hentai (I can confirm that; the fact that his PC was clean of anything even remotely naughty was already a bit of a folklore legend around the dorm), and vowed he would maintain this lifestyle forever.
He tried same-age relationships, and some were ok, but none lasted more than six or seven months.
Of course, the psych eval and situational examination cleared him of any suspicion, but the damage had already been done, and his parents picked him up that afternoon. If it hadn't been for campus security walking him out, he would have been mobbed by the dozens of angry students who had heard the worst part of what happened.
The school even tried clearing his name later, but it only made his memory more of a laughing stock.
We kept in touch for a few months, mainly through Steam as we were both avid gamers.
Then one day he just stopped logging in. At the time I was too scared to call his family, so I just waited. Sixteen years later, he's still offline.
Peyton I miss you man.
P.S. Literally zero consequences for the therapist for ruining a bright kid's life.
I’m so sorry, that’s such a sad story.
How are the drawings made?
[This comment has been deleted by an automated system]
To those who say "no actual children are involved":
What the fuck was the dataset trained on, then? Even regular art generators have had the issue of "lolita porn" (not the drawn kind, but the "very softcore" kind with real kids!) ending up in their training material, and with current technology it's very difficult to remove it without redoing the whole dataset yet again.
At least with drawings I can understand the point, as long as no one uses a model, or it is easy to differentiate between real images and drawings (I've heard really bad things about those doing it in a "high art" style). Have I also told you how much of a disaster it would be if the line between real and fake CSAM were muddied? We already have moronic people arguing "what if someone matures faster than the others", like Yandev. We will get "what if someone gets jailed after thinking their stuff was just AI-generated".
Even regular art generators have had the issue of "lolita porn" ending up in their training material
Source? I've never heard of this happening. I feel like it would be pretty difficult for material that's not easily found on the clearnet (where AI scrapers source their training material) to end up in the training dataset without it being very intentional.
Sorry, no: you are just plain wrong here when it comes to training data.
Zero public AI image generators used CSAM as training material.
People like that are pedo apologists, and the fact that they're not being banned from the major Lemmy instances tells us all we need to know about those worthless shitholes.