20 sats \ 5 replies \ @k00b 25 Sep \ on: Nostr content moderation? nostr
The only example of client-side filtering I'm aware of is in Amethyst: if a certain number of your follows flagged something, it wouldn't be shown.
I suspect most of the main relays are doing AI moderation. For illegal content, it probably works really well.
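For illustration, a minimal sketch of that kind of follows-based filter. The function name, data shapes, and threshold are my own assumptions, not Amethyst's actual implementation:

```python
def should_hide(event_id, reports, follows, threshold=2):
    """Hide an event if enough of the accounts you follow have reported it.

    reports: dict mapping event id -> set of reporter pubkeys
    follows: set of pubkeys the user follows
    """
    flaggers = reports.get(event_id, set()) & follows
    return len(flaggers) >= threshold
```

The point is that moderation stays subjective: two users with different follow lists see different feeds from the same relays.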
The only AI moderation I know of is text-only.
reply
I'm aware of the concept, but not of which services (if any) are being used for this, or whether they were developed in-house or come from other automation providers.
reply
Most are probably using cloud APIs. I've looked at AWS's before: https://aws.amazon.com/rekognition/content-moderation/. I'm sure there are a lot of them available.
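For example, with Rekognition the call is roughly this (sketch only; the actual `boto3` call is commented out since it needs AWS credentials, and the helper name is mine):

```python
def label_names(resp):
    """Pull the moderation label names out of a Rekognition
    DetectModerationLabels response dict."""
    return [label["Name"] for label in resp.get("ModerationLabels", [])]

# import boto3
# rek = boto3.client("rekognition")
# resp = rek.detect_moderation_labels(
#     Image={"Bytes": image_bytes},  # raw image bytes
#     MinConfidence=80,              # only return labels above this confidence
# )
# if label_names(resp):
#     ...  # reject or flag the upload
```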
It should be fairly easy for relays to run their own models too. This NSFW-detecting model is the most popular image classifier on Hugging Face: https://huggingface.co/Falconsai/nsfw_image_detection
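Rough sketch of wiring that model up via the `transformers` pipeline (the actual pipeline call is commented out since it downloads the model; the 0.9 cutoff and helper are my own assumptions — that model's labels are `normal` and `nsfw`):

```python
def is_nsfw(predictions, threshold=0.9):
    """predictions: list of {"label": ..., "score": ...} dicts,
    as returned by a transformers image-classification pipeline."""
    return any(p["label"] == "nsfw" and p["score"] >= threshold
               for p in predictions)

# from transformers import pipeline
# classifier = pipeline("image-classification",
#                       model="Falconsai/nsfw_image_detection")
# predictions = classifier("image.jpg")  # downloads weights on first use
# if is_nsfw(predictions):
#     ...  # drop or quarantine the image
```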
reply
Looks like OpenAI can do it now: https://openai.com/index/upgrading-the-moderation-api-with-our-new-multimodal-moderation-model/
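A sketch of what calling it could look like with the `openai` Python client (API call commented out since it needs a key; the helper and placeholder URL are mine):

```python
def flagged_categories(categories):
    """categories: dict of category name -> bool, e.g. from
    result.categories.model_dump() on a moderation response."""
    return [name for name, hit in categories.items() if hit]

# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# resp = client.moderations.create(
#     model="omni-moderation-latest",
#     input=[{"type": "image_url",
#             "image_url": {"url": "https://example.com/img.png"}}],
# )
# result = resp.results[0]
# if result.flagged:
#     print(flagged_categories(result.categories.model_dump()))
```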
reply