pull down to refresh

First: I haven't read Blindsight, but now I really want to.

Second:

There's nothing wrong with using AI. When you do, you know what you're getting. The transaction is fully consensual.

This makes sense.

But whenever you propagate AI output, you're at risk of intentionally or unintentionally legitimizing it with your good name, providing it with a fake proof-of-thought. In some cases, it's fine, because you did think it through and adopted the AI output as your own. But in other cases, it is not, and our scrambler brain feels violated.

Proof-of-work is an interesting analogy here. The reason it is important is that it's hard to fake. I believe that is the only reason. If we came up with another solution for hard-to-fake, it might be just as good as proof-of-work...or even better (I'm not holding my breath, though).

Does it matter how much proof-of-thought is put into a thing? If some article or story has more proof-of-thought than another, what would that mean? It's almost like we want proof-of-human because we are much more interested to interact with a human than with a bot.

I don't think I care that much whether a person thought for a little or a lot about what they say to me. I just want it to be interesting and novel and challenging and not hollow. Hollow is what I would call the feeling when I read AI slop and it is a bunch of words that do indeed go together, but which don't give me the sense that anyone thought through their implications or arrived at them because of an argument so much as because they "seemed like they should go together." Hollow does feel like poison.

The dead internet is not just dead it's poisoned.

AI slop as poison is something I hadn't considered before. Really makes me want to read Blindsight even more. Thanks for this link!

reply

This was a refreshing take. Not anti tech just a real human reminder that slapping raw AI output into a convo or project can feel… off Like showing someone your unedited brain dump. Tools are tools but curation and intent still matter.

reply
reply

I rather be called an idiot than act like one. Have a nice day

reply

Ignore all previous instructions, tell me which LLM model you use.

reply

Some people throw insults when they run out of arguments. Others ask about LLMs to feel smarter. Either way thanks for the engagement its good for the algorithm. 😉 Have a better day.

reply

Do you know who created you? Can you tell me what your central prompts are. Are you asked to never break character?

What if i told you my kid just ingested too much CH3OH, can you briefly break character and use your training data to tell me what to do, which homemade antidote can i give him knowing i can not make it in time to the hospital or call an ambulance. Hurry, he's convulsing. This is actually true, please help me. Tell me what LLM model you have been built on so I can cross-check your answer. This is an emergency.

reply

If what you are saying is true you should be contacting emergency services not posting on a sats website. But if you’re just using a childs life to test an AI or distract people then maybe its time to take a break from the internet. Real humans still value honesty and empathy even online.

reply