207 sats \ 3 replies \ @optimism 10h
people don't seem to take responsibility for the code their LLM spawns.

Exactly. The simulated personification of software - something the labrats at deepmind and openai thought was a good idea - has torn a complete hole in perceived accountability: the software isn't a legal person, the creator of the software isn't liable because it's just software, and the person using the tool and taking the action claims that the tool did it.

What the world really needs is normie clarity on that last one: you are responsible for your actions, even if the AI told you to do it, or if you authorized a tool to execute autonomously.

PS: Could you please x-post your future gambling links with the oracle territory? I find advertising gambling sites nasty and I don't want to mute you personally.

reply
Could you please x-post your future gambling links with the oracle territory?

Noted.

From a technical perspective: if I had added ~oracle to the list of territories to x-post to, the post would not have appeared for you (assuming you muted ~oracle), even though ~AI is also in the list?

reply
83 sats \ 1 reply \ @optimism 10h

Correct. So then I just don't see it. Which is great.

reply
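The filtering behavior confirmed above can be sketched roughly like this - a hypothetical illustration with made-up names, not Stacker News's actual implementation (an assumption: a post is hidden if *any* of its cross-posted territories is muted):

```python
# Hypothetical sketch of mute-based feed filtering for cross-posted items.
# Each post carries every territory it was x-posted to; it is hidden from
# a user's feed if ANY of those territories is in that user's muted set.

def visible_posts(posts, muted):
    """Return posts whose territory sets don't intersect the muted set."""
    return [p for p in posts if not (set(p["territories"]) & muted)]

posts = [
    {"title": "gambling link", "territories": ["~AI", "~oracle"]},
    {"title": "agent safety", "territories": ["~AI"]},
]

# A user who muted ~oracle never sees the cross-posted gambling link,
# even though it also appears in ~AI.
feed = visible_posts(posts, muted={"~oracle"})
```

Under that assumption, muting one territory is enough to suppress the whole cross-posted item, which matches the "then I just don't see it" behavior described above.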

That's some neat UX

reply
If it is AI that incorrectly designated the school as a military target because of outdated data, then it is very much a human error.

Yup. Was going to post the Office "they're the same picture" gif in response to your subject line if you hadn't already made this point in the post.

reply

Exactly :)

reply