
"The Blake Lemoine incident is remembered today as a high‑water mark of AI hype. It thrust the whole idea of conscious AI into public awareness for a news cycle or two, but it also launched a conversation, among both computer scientists and consciousness researchers, that has only intensified in the years since. While the tech community continues to publicly belittle the whole idea (and poor Lemoine), in private it has begun to take the possibility much more seriously. "

Michael Pollan is well known for his books on food production and the food/farming industry. I read a lot of his stuff when I studied agribusiness and agriculture management. It's a bit of a leap to see him opining on tech, much less AI. He's a deep researcher, which is obvious in his books, but I'm curious whether this leap is backed by real experience or if he's just parroting a particular angle he's hooked on.

138 sats \ 1 reply \ @freetx 2 Mar

An interesting project is https://github.com/rasbt/LLMs-from-scratch

It basically walks you through setting up a toy LLM from scratch in Python. One of the real benefits of the exercise is that you start to understand at a deeper level what the LLM is doing.

Long story short, it is autocorrect++

It certainly is uncanny how well it can simulate human writing (which then hacks our brains into thinking it's conscious), but there is no "self" there. There is no "agency"; the LLM doesn't have a will or any desires, nor does it actually understand anything. It's a very, very, very large pattern matcher. When you sit there looking at the blinking cursor, there is nothing going on at the other end of the connection.....just a server with some bits in its memory somewhere.
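To make the "pattern matcher" point concrete, here's a minimal sketch (my own toy illustration, not code from that repo): a real LLM uses a neural network over token probabilities instead of raw counts, but even a bigram counter shows the same basic move of predicting the next token from patterns in the training data.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real LLMs train on trillions of tokens,
# but the principle (predict the next token from observed patterns)
# is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

There's no understanding anywhere in that loop, just frequency statistics; scaling the same idea up by many orders of magnitude (and swapping counts for learned weights) is what produces the uncanny fluency.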

However, humans will attribute consciousness to it. That's the great danger of the tech....not that it's going to become self-aware and kill us, but that we will trick ourselves into thinking it's self-aware.

43 sats \ 0 replies \ @Aeneas 5h

Yes, absolutely. I've been encouraging anyone and everyone involved in LLM hype to just spend some time learning how the sausage is made. Just a little bit is enough to see they're not and can't be conscious.

You'll learn their obvious limitations, and hence their actual uses. So you'll use the LLM adequately.... which means you certainly won't "Ask the AI for its opinion about your recent argument with your wife" because you'll know it has no opinion—it doesn't think shit about fuck.


while natural stupidity is at an all-time high right now
