42 sats \ 0 replies \ @hn OP 21 Jan
This link was posted by ink404 1 day ago on HN. It received 282 points and 448 comments.
Even if this is in some way successful, it should be possible to mitigate. It looks like it just imperceptibly perturbs the image to confuse feature extraction. Neat, but many training pipelines already add heavy noise or discard information through data augmentation, both to stretch the training set and to help the model generalize. Those kinds of pipelines should be fairly resistant to this type of poisoning (a sketch of what that augmentation might look like is below).
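For illustration only, a minimal sketch of that idea, assuming PyTorch/torchvision: standard augmentations (random crops, blur, added noise) perturb images far more aggressively than an "imperceptible" perturbation does, so the poison signal may get washed out during training. The noise level and the `data/` path are placeholder assumptions, not anything from the linked tool.

```python
# Sketch: an augmentation pipeline whose distortions dwarf an
# imperceptible perturbation. Assumes torch + torchvision are installed.
import torch
from torchvision import transforms

def add_gaussian_noise(img, std=0.05):
    """Add pixel noise well above the magnitude of a subtle perturbation."""
    return (img + torch.randn_like(img) * std).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # discard spatial detail
    transforms.RandomHorizontalFlip(),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # low-pass filter
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),  # drown out any high-frequency poison
])

# Used like any other training transform (path is a placeholder):
# dataset = torchvision.datasets.ImageFolder("data/", transform=augment)
```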
It's probably also fairly easy to detect these kinds of perturbations and smooth them out. Basically, if two images look the same to the eye, you should be able to make them look the same to your convolutional kernels (or whatever the feature extractor is), along the lines of the preprocessing sketch below.
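A minimal sketch of that "smoothing", assuming Pillow. A slight blur plus lossy JPEG re-encoding both act as low-pass filters: the image still looks the same to the eye, while high-frequency perturbation patterns are degraded. The function name, parameter values, and filenames are all placeholder assumptions.

```python
# Sketch: launder an image through a blur and a JPEG re-encode,
# preserving appearance while destroying high-frequency perturbations.
import io
from PIL import Image, ImageFilter

def launder(path: str, blur_radius: float = 0.8, jpeg_quality: int = 75) -> Image.Image:
    img = Image.open(path).convert("RGB")
    img = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))  # gentle low-pass
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)  # lossy re-encode
    buf.seek(0)
    out = Image.open(buf)
    out.load()  # force a full decode before the buffer goes out of scope
    return out

# cleaned = launder("poisoned.png")  # placeholder filename
# cleaned.save("cleaned.jpg")
```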
My heart truly goes out to artists, designers, editors, and anyone else that makes stuff that’s easily replicable with AI, even programmers to an extent. The shoggoths are here and we have to adapt.