It seems crazy to me, but I've seen this concept floated on several different posts. There seem to be a number of users here who think there is some way AI-generated CSAM will reduce real-life child victims.
Like the comments on this post here.
https://sh.itjust.works/post/6220815
I find this argument crazy. I don’t even know where to begin to talk about how many ways this will go wrong.
My views (which are apparently not based in fact) are that AI CSAM is not really that different from "actual" CSAM. It will still cause harm when viewed, and it is still based on the further victimization of the children involved.
Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators by giving them an outlet that doesn't involve real living victims completely ignores the reality of how AI content is created.
Some have compared pedophilia and child sexual assault to drug addiction, which is dubious at best, and pretty offensive imo.
Using drugs has no inherent victim. And it is not predatory.
I could go on, but I'm not an expert or a social worker of any kind.
Can anyone link me articles talking about this?
I’m just gonna put this out here and hope not to end up on a list:
Let's do a thought experiment and be empathetic with the human that is behind the predator. Ultimately they are sick, and they feel needs that cannot be met without doing something abhorrent. This is a pretty fucked up situation to be in. Which is no excuse to become a predator! But understanding why people act how they act is important to creating solutions.
Most theories about humans agree that sexual needs are pretty important for self-realization. For the pedophile this presents two choices: become a monster or never reach self-realization. We have got to accept that this dilemma is the root of the problem.
Before, there was only one sort of middle-ground option: video and image material, which the consumer could rationalize as being not as bad. To be clear, that isn't my opinion; I agree with the popular opinion that this still harms children and needs to be illegal.
Now, for the first time, there is a chance to cut through this dilemma by introducing a third option: generated content. This still uses existing csam as a basis, but so does every database that is used to find csam for prevention and policing. The actual pictures and videos aren't stored in the AI model and don't need to be stored after the model has been created. With that model, more or less infinite new content can be created, which imo harms the children significantly less directly. This is imo different from actual csam material because no one can tell who is and isn't in the base data.
Another benefit of this approach has to do with the reason why csam exists in the first place. AFAIK most of this material comes from situations where the child is already being abused. At some point the abuser recognises that csam can get them monetary benefits and/or access to csam of other children. This is where I will draw a comparison to addiction, because it's kind of similar: people doing illegal stuff because they have needs they can't fulfill otherwise. If there were a place to get the "clean" stuff, far fewer people would go to the shady corner dealer.
In the end, I think there is a utilitarian argument to be made here. Given the far-removed damage that generating csam via AI still deals to the actual victims, we could help people not become predators, help predators not repeat, and most importantly prevent, or at least lessen, the amount of further real csam being created.
[This comment has been deleted by an automated system]
You make a very similar argument as @Surdon and my answer is the same (in short, my answer to the other comment is longer):
Yes, giving everyone access would be a bad idea. I parallel it to controlled-substance access, which reduces black-market drug sales.
You do have some interesting details though:
This has been mentioned a few times, mostly with the idea of mixing "normal" children's photos with adult porn to generate csam. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.
This is more or less my expectation too, but I wouldn't count on the research coming out in a few years. There isn't much incentive to do actual research on the topic afaik: there isn't much to be gained because of the probable reaction of the regulators, and much to lose with such a hot topic.
[This comment has been deleted by an automated system]
I didn't know this was a thing tbh. I knew that you could get them to generate adult porn or combine faces with adult porn, but I didn't know they could already create realistic csam. I assumed they used the original material to train one of the open models. Well, that's even more horrifying.
Didn't even think about that. Exchanging these models will be significantly less risky than exchanging the actual material. Images are being scanned by cloud storage providers, and archives with weak passwords apparently are too. But no one is going to execute an AI model just to see whether it can or cannot produce csam.
[This comment has been deleted by an automated system]
Except there is a good bit of evidence to show that consuming porn is actively changing how we behave in relation to sex. By creating CSAM with AI, you create the depiction of a child that is a mere object for the use of sexual gratification. That fosters a lack of empathy and an egocentric, self-gratifying viewpoint. I think that can be said of all porn, honestly. The more I learn about what porn does to our brains, the more problematic I see it.
AI CSAM will not create new pedophiles, but it may keep existing pedophiles from fueling a disgusting market of child exploiters.