
Sure, a purist could insist on imagining it all, maybe even acting it out themselves, but it's hardly the case that the movie somehow robbed you of your ability to think.
The value of a good book, or a good movie, is not that it takes away your imagination; it is the pronouncement of an idea. A vision that is shared with you. An influence. Hell, there are TV series that have influenced me.
Now I'd argue that an LLM reflects. It can be a sounding board, but you have to have had the idea already. Or someone else has written something about whatever you're "chatting" about, and the RNG happens to boost it just enough to fit into the autocorrect span.
So despite the "chat" in the bot's class name, is there really a conversation if you're just talking to a mirror? Is there a vision being shared?
100 sats \ 1 reply \ @Scoresby OP 4h
It's a good point. Chat doesn't produce thoughtful responses. I agree that talking to chat is just a different thing from reading a story or watching a movie. I think the author of the article did a better job with the analogy than I did.
I appreciated the idea that chat is yet another way of designing the world around us to take care of things we don't need to think about, so that we can spend more time thinking about exciting things. I also appreciated his point that we aren't working with some finite lump of thought that chat eats away at.
111 sats \ 0 replies \ @optimism 2h
I agree with that part of it. I just do not feel comfortable with the comparison, not yours, but theirs. Because ChatGPT isn't an influencer. Or... shouldn't be.
I'm of 2 minds on this subject overall:
  1. I know from experience that it is easy to defer gathering knowledge to an LLM and that it can be a useful tool
  2. I also know from experience that it can do harm, because I've felt it, and if I'm 100% honest, I still feel it a little, depending on the subject.1
There is no precedent for automating cognition, only science fiction, so there is no best practice and no guidance. You cannot be a trained chatbot user. We're making this up as we go, and the painful thing is, so are these "AI researchers". The fact that they didn't reach AGI with gpt-5 means they have no fucking clue what they are doing.
Now, for an old guy like me, the worst thing that can happen is that I completely destroy myself; whatever... I've done my duties. But if I were 30 years younger, or 40, this could have a lasting impact, especially if it doesn't work. And it definitely doesn't work as advertised.
Do we get the damage done to us by some Silicon Valley scammer without the ultimate benefit? Is that the bottom line?

Footnotes

  1. For example, right now I cannot be bothered to (tediously) write my own data pipelines anymore. I still wrote the R cubing and charting "code" for that rpi4 analysis I did the other day, but I probably should have vibed that too and saved my energy for solving non-trivial problems.
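
     For the curious, a minimal sketch of what a cubing-and-charting step like that might look like in R, assuming dplyr and ggplot2; the file name and column names (device, metric, value) are hypothetical, and the actual rpi4 analysis isn't reproduced here:

     ```r
     # Minimal sketch of a "cube then chart" step; names are illustrative only.
     library(dplyr)
     library(ggplot2)

     # Hypothetical raw measurements with columns: device, metric, value.
     df <- read.csv("rpi4_results.csv")

     # "Cube" the raw data: aggregate to one mean value per device/metric pair.
     cube <- df |>
       group_by(device, metric) |>
       summarise(mean_value = mean(value), .groups = "drop")

     # Chart the aggregated values as grouped bars.
     ggplot(cube, aes(x = metric, y = mean_value, fill = device)) +
       geom_col(position = "dodge") +
       labs(title = "rpi4 benchmark summary (illustrative)")
     ```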