Other samples:
Android: https://github.com/nipunru/nsfw-detector-android
Flutter (BSD-3): https://github.com/ahsanalidev/flutter_nsfw
Keras (MIT): https://github.com/bhky/opennsfw2
I feel it’s a good idea for those building native clients for Lemmy to integrate projects like these and run offline inference on feed content for the time being, to cover content that isn’t marked NSFW but should be. Something like the sketch below.
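For illustration, here’s roughly what that screening could look like with the opennsfw2 package linked above. This is just a minimal sketch; the 0.7 threshold and the local image paths are assumptions, not something from the library docs.

```python
# Minimal sketch: flag feed images that aren't marked NSFW, using opennsfw2.
# Assumes the images are already downloaded locally; the threshold is a guess
# and would need tuning per client.
import opennsfw2 as n2

NSFW_THRESHOLD = 0.7  # assumed cutoff


def should_blur(image_path: str) -> bool:
    """Return True if the image is likely NSFW and should be hidden/blurred."""
    probability = n2.predict_image(image_path)  # float in [0, 1]
    return probability >= NSFW_THRESHOLD


if __name__ == "__main__":
    for path in ["post_1.jpg", "post_2.png"]:  # placeholder feed images
        print(path, "blur" if should_blur(path) else "show")
```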
What does everyone think about enforcing further censorship on the client side, especially in open-source clients, as long as it pertains to this type of content?
Edit:
There’s also this, though it takes a bit more effort to implement properly, and it provides a hash that can be used for reporting purposes: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX
Python package (MIT): https://pypi.org/project/opennsfw-standalone/
NSFW != porn
I wish there were such detectors for other triggering stuff, like gore, creepy insects, or any visually based phobia. Everyone just freaks out about porn.
I’m actually looking at this exact thing: compiling them into an open-source package to use in Swift. Just finished NSFW. But everything you mentioned should be in a “ModerationKit” as well, allowing users to toggle based on their needs.
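To illustrate the toggle idea (a rough sketch in Python rather than the actual Swift package; the category names and per-category detectors are hypothetical placeholders):

```python
# Rough sketch of a "ModerationKit"-style toggle filter. The categories and
# the detector functions are hypothetical, not a real API.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class ModerationSettings:
    # Users toggle which categories they want filtered.
    enabled: Dict[str, bool] = field(default_factory=lambda: {
        "nsfw": True,
        "gore": False,
        "insects": False,
    })


def should_hide(image_path: str,
                settings: ModerationSettings,
                detectors: Dict[str, Callable[[str], float]],
                threshold: float = 0.7) -> bool:
    """Return True if any enabled category's detector scores above threshold."""
    for category, on in settings.enabled.items():
        detector = detectors.get(category)
        if on and detector is not None and detector(image_path) >= threshold:
            return True
    return False
```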
Sounds good.
To be fair, most non-porn “NSFW” is probably “NSFL”. So NSFW in its exclusive usage is almost entirely porn.
Though some people consider nudity art, without anything sexual about it.
In many cultures around the world, nudity in itself isn’t considered inappropriate or sexual.