
Another title for this article could have been "No, using chatbots doesn't make you think less."
Masley nicely brings up the way we design lots of things to do our thinking for us:
When I approach a door, I have no conscious thoughts at all about how to open it. My active thinking is occupied by other stuff, my arm just subconsciously reaches out to the correct location. In these moments, I have “outsourced” my thinking to the door design, because there was a possible world where I had to actively think about opening the door, and the reason I don’t is how the door was crafted.
The fact that I have outsourced so many possible thoughts to my built environment liberates me to think about higher level stuff, the things I actually find deeply valuable about the world.
This sort of thinking can be extended to using chatbots:
people who worry about how chatbots always involve outsourcing some mental task might not be noticing the gigantic mountain of mental tasks we have already outsourced to civilization
Masley then turns to the example of watching a movie. When we sit down to enjoy something like Peter Jackson's version of The Lord of the Rings, a lot of work has been done for us. Sure, a purist could insist on imagining it all, maybe even acting it out themselves, but it's hardly the case that the movie somehow robbed you of your ability to think.
Just like we benefit from specialization in labor, I was benefiting from the cognitive specialization of people who had spent decades thinking about story, images, music, and sets, and this left me with way more things to think about.
And yet many people seem to believe that chatbots are somehow exempt from this trend:
But the author extends this to strongly imply that using ChatGPT at all is causing people to think less, because any cognition the chatbot performs leaves the user with fewer thoughts to think.
Masley does a nice job of acknowledging the nuance here:
There are also clearly cases where it’s very bad to offload our cognition. Things like:
  • Homework.
  • Messages on dating apps.
  • Summarizing a valuable complex book instead of reading it (assuming you had the time and energy to read it and would have benefited from it).
  • Personal connection and close conversation.
It's well worth reading his general dismantling of the idea that if we use chatbots to do some of our thinking there will somehow be less thinking for the rest of us to do.
102 sats \ 2 replies \ @optimism 2h
Sure, a purist could insist on imagining it all, maybe even acting it out themselves, but it's hardly the case that the movie somehow robbed you of your ability to think.
The value of a good book, or a good movie, is not that it takes away your imagination; it is the articulation of an idea. A vision that is shared with you. An influence. Hell, there are TV series that have had an influence on me.
Now I'd argue that an LLM reflects. It can be a sounding board, but you have to have already had the idea. Or someone else has written something about whatever you're "chatting" about that the RNG happens to boost just enough to fit into the autocorrect span.
So despite the chat in the bot's class name, is there really a conversation if you're just talking to the mirror? Is there a vision being shared?
100 sats \ 1 reply \ @Scoresby OP 1h
It's a good point. Chat doesn't produce thoughtful responses. I agree that talking to chat is just a different thing from reading a story or watching a movie. I think the author of the article did a better job with the analogy than I did.
I appreciated the idea that chat is yet another way of designing the world around us to take care of things we don't need to think about, so that we can spend more time thinking about exciting things, as well as his point that we aren't working from some finite lump of thought that gets eaten away by chat.
100 sats \ 0 replies \ @optimism 46m
I agree with that part of it. I just do not feel comfortable with the comparison, not yours, but theirs. Because ChatGPT isn't an influencer. Or... shouldn't be.
I'm of 2 minds on this subject overall:
  1. I know from experience that it is easy to defer gathering knowledge to an LLM, and that it can be a useful tool.
  2. I also know from experience that it can harm, because I've felt it, and if I'm 100% honest I still feel it a little, depending on the subject. 1
There is no precedent for automating cognition, only science fiction, so there is no best practice and no guidance. You cannot be a trained chatbot user. We're making this up as we go, and the painful thing is, so are these "AI researchers". The fact that they didn't reach AGI with gpt-5 means they have no fucking clue what they are doing.
Now, for an old guy like me, the worst thing that can happen is that I completely destroy myself; whatever... I've done my duty. But if I were 30 or 40 years younger, this could have a lasting impact, especially if it doesn't work. And it definitely doesn't work as advertised.
Do we get the damage done to us by some silicon valley scammer without the ultimate benefit? Is that the bottom line?

Footnotes

  1. For example, right now I cannot be bothered to (tediously) write my own data pipelines anymore. I still wrote the R cubing and charting "code" for that rpi4 analysis I did the other day, but I probably should have vibed that too and saved my energy for solving non-trivial problems.
100 sats \ 1 reply \ @ihatevake 51m
I had a 2-hour conversation with ChatGPT about some technical bitcoin topics. That would've been impossible without chatbots; I'd have had to wait to attend a bitdevs or something.
Why? You can easily have such a conversation in ~bitdevs. You just won't get instant answers, but mistakes will be caught when it's done in public.