
Sometimes people ask me why I don't use AI.
First: I haven't read Blindsight, but now I really want to.
Second:
There's nothing wrong with using AI. When you do, you know what you're getting. The transaction is fully consensual.
This makes sense.
But whenever you propagate AI output, you're at risk of intentionally or unintentionally legitimizing it with your good name, providing it with a fake proof-of-thought. In some cases, it's fine, because you did think it through and adopted the AI output as your own. But in other cases, it is not, and our scrambler brain feels violated.
Proof-of-work is an interesting analogy here. The reason it is important is that it's hard to fake. I believe that is the only reason. If we came up with another solution for hard-to-fake, it might be just as good as proof-of-work...or even better (I'm not holding my breath, though).
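The hard-to-fake property can be made concrete with a hashcash-style sketch (a minimal illustration of the general idea, not any particular protocol): producing a valid nonce requires many hash attempts, while checking one takes a single hash.

```python
import hashlib

def proof_of_work(message: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 digest starts with `difficulty` zero hex digits.

    Finding the nonce is expensive (many hash attempts on average);
    that cost is what makes the proof hard to fake.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(message: str, nonce: int, difficulty: int) -> bool:
    """Check a claimed proof with a single cheap hash."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the whole point: expensive to produce, trivial to verify. Proof-of-thought has no such cheap verifier, which is exactly why a trusted name ends up standing in for one.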
Does it matter how much proof-of-thought is put into a thing? If some article or story has more proof-of-thought than another, what would that mean? It's almost like we want proof-of-human, because we are much more interested in interacting with a human than with a bot.
I don't think I care that much whether a person thought for a little or a lot about what they say to me. I just want it to be interesting and novel and challenging and not hollow. Hollow is what I would call the feeling when I read AI slop and it is a bunch of words that do indeed go together, but which don't give me the sense that anyone thought through their implications or arrived at them because of an argument so much as because they "seemed like they should go together." Hollow does feel like poison.
The dead internet is not just dead; it's poisoned.
AI slop as poison is something I hadn't considered before. Really makes me want to read Blindsight even more. Thanks for this link!