
Be honest, does your LLM usage make you feel like you're getting dumber?
I've heard friends claim they don't think critically as often, I find myself prompting LLMs with ever-sloppier grammar and punctuation that I never would have used before (even incomplete sentences that don't make sense), I hear people rationalizing their stance in arguments with "ChatGPT said so", and I wonder if we're outsourcing our thinking enough to decay the neural networks in our brains.
Sort of how access to step-by-step GPS instructions has hurt people's ability to figure out which way is north, or how not exercising makes people out of shape.
As a related note, anytime I come across an interesting tweet, the first reply I see is someone asking "Grok, what does this mean?".
295 sats \ 0 replies \ @nullcount 6h
It doesn't matter how "hard" you work. Or how wrinkly your brain is. Nobody gives a shit if you're "actually smart". People only care about results.
If the "result" of using LLMs is that you write better emails, then, no, you're not getting dumber. You're getting better at communication.
If the "result" of using LLMs is that you back your arguments with appeals to authority, i.e. "I'm right because GPT said so" (a logical fallacy), then maybe you are getting dumber.
"Computers are a bicycle for the mind" -- Steve Jobs
If the result of using a bike is traveling faster, and you use that result to go to more places and get more cardio, then a bike is making you healthier.
If the result of using a bike is traveling faster, and you use that result to avoid cardio and "waste" any time that you've saved while traveling, then a bike may be making you sicker.
One of the earliest critiques of new information technology comes from a myth written by Plato:
"This invention [written language] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them." — Plato, Phaedrus
Large Language Models are part of the next technological evolution of human language.
Spoken words enable people to transmit information in real-time.
Written words enable people to transmit information into the future, without relying on human memory.
LLMs enable transmission of information into the future, without relying on human memory, AND without requiring another human to assimilate the information verbatim.
The result of written language may have "weakened" our ability to memorize. Just like LLMs may weaken our ability to assimilate information. But the result of writing means people can "record" more information than they could without. And the result of LLMs means we can assimilate more information than ever before.
reply
I'm afraid so.
I find myself prompting LLMs with ever-sloppier grammar and punctuation that I never would have used before (even incomplete sentences that don't make sense)
You too eh?
I sometimes don't even prompt it now, but just copy / paste in some text or a screenshot with "?" (or nothing) and it knows what I want.
Or... maybe... I don't even know what I want, and I've outsourced that as well. "Hey robot, think about this for me."
My spelling has deteriorated since autocorrect spread everywhere, and it's probably getting worse now because I don't even bother to spell correctly when using an LLM. I just speedmash the keyboard and it parses that sloppy input just fine.
Maybe there's a competitive edge to be found here for those who can avoid overusing these tools, as most "creative" output trends towards the mean.
Thanks for the wakeup call! Time to reread Nicholas Carr.
From his book, The Glass Cage:
“When an inscrutable technology becomes an invisible technology, we would be wise to be concerned. At that point, the technology's assumptions and intentions have infiltrated our own desires and actions. We no longer know whether the software is aiding us or controlling us. We're behind the wheel, but we can't be sure who's driving.”
...
“If we’re not careful, the automation of mental labor, by changing the nature and focus of intellectual endeavor, may end up eroding one of the foundations of culture itself: our desire to understand the world. Predictive algorithms may be supernaturally skilled at discovering correlations, but they’re indifferent to the underlying causes of traits and phenomena. Yet it’s the deciphering of causation—the meticulous untangling of how and why things work the way they do—that extends the reach of human understanding and ultimately gives meaning to our search for knowledge. If we come to see automated calculations of probability as sufficient for our professional and social purposes, we risk losing or at least weakening our desire and motivation to seek explanations, to venture down the circuitous paths that lead toward wisdom and wonder. Why bother, if a computer can spit out “the answer” in a millisecond or two? In his 1947 essay “Rationalism in Politics,” the British philosopher Michael Oakeshott provided a vivid description of the modern rationalist: “His mind has no atmosphere, no changes of season and temperature; his intellectual processes, so far as possible, are insulated from all external influence and go on in the void.” The rationalist has no concern for culture or history; he neither cultivates nor displays a personal perspective. His thinking is notable only for “the rapidity with which he reduces the tangle and variety of experience” into “a formula.”
reply
I'm now able to ask questions I would have been too anxious to ask a teacher, a mentor or a group of people before.
To answer your question, no. I'm asking questions and learning at a much faster rate than I would have otherwise.
reply
for me, no. my brain is kept sharp by always having new foreign words to learn (bonus of living in a foreign country), playing music, reading books and learning about new topics like bitcoin mining etc
I have chatGPT open constantly and use it for all sorts, from light design work to asking various questions.
a dumb person is going to remain dumb, ai or not, so i think the onus of not getting dumber is on the individual
reply
Neural pathways are being rewired. First they brought us calculators in our pockets, and we lost the ability to do mental maths, and we said nothing. Then they brought us interconnectivity and a hive mind in our pocket, and we lost the ability to conduct proper dialogue, and we said nothing. Now they're giving us a tool that flattens information based on the most widespread consensus, and we're outsourcing our thinking, and we will say nothing, because we will have lost the ability to form words and sentences.
reply
10 sats \ 0 replies \ @carter 5h
I think it's because I'm getting older. I can't do as much in a day.
reply
10 sats \ 0 replies \ @sox 7h
The sloppier grammar and punctuation started with Google for me. I think it's a double-edged sword: I can think quicker with keywords, but I speak like I'm drunk.
reply
53 sats \ 0 replies \ @nullama 14h
I think it depends.
On one side, I do agree that having a system that works with typos and incomplete text ends up with the user writing incomplete inputs filled with typos...
But on the other side, I've found that the best answers you get from these LLMs come when you actually write a well-crafted text in which you specify all the important things that matter to you. I guess this might be what people call prompt engineering. Interacting with LLMs in this way actually makes you think a lot more about what you're asking and write a much better question, which sometimes gets you to your answer even before sending the prompt, because you figured it out while thinking it through.
reply
98 sats \ 1 reply \ @Car 14h
In the early days of ChatGPT I experimented with doing what you described; I think everyone did. As far as getting dumb, I think it makes us more efficient and intelligent, because people who are naturally curious can ask it all sorts of questions and get answers they wouldn't otherwise know. I still use Grammarly to write because it allows me to flow without changing my tone or voice.
It also still cleans up some basic grammar and punctuation, but the AI features cost money, which I don't pay for. I then like to take the text, drop it into a prompt, and ask for help with structure, form, and any suggestions.
I don’t take all the ideas but I find that it offers some really good sentences that makes my writing sing.
I look at AI as pedals for guitars or plugins for effects in music mixing. The point for me is to try to record the best first take I possibly can with incredible focus, then refine. But if it gets to the point where it's a completely comped track or effect-heavy, it kind of drowns out everything else.
reply
41 sats \ 0 replies \ @kepford 14h
Guitar pedals. Great analogy for chatbots.
reply
110 sats \ 0 replies \ @optimism 15h
I've felt that, so I'm trying to actively resist it (by never relying on an LLM answer; I'm resisting Gell-Mann amnesia concurrently), and I'm seeing cognitive decline happen around me. But I'm not sure it's just LLMs: I've felt less able to focus since I got covid in 2022, which was before LLMs, though even that may be a false attribution because it could just be age related. I do feel that leaving the bird app helped my critical thinking, because it lessened my exposure to idiocy, and I haven't scrolled nostr for a while for the same reason.
It's tough; I'm not sure what to attribute and how to prevent decline. But trying.
reply
I don’t use LLMs like this, so no (not by LLMs anyway)
reply
I think so, but I can't blame LLMs for it.
reply
10 sats \ 0 replies \ @OT 16h
I haven't really started using LLMs, so I guess I can say no.
reply
10 sats \ 0 replies \ @Riberet 12h
I have not noticed any cognitive decline from it; I use LLMs very little for now.
reply
ChatGPT said so
This one kills me and makes me want to stay away from those people. In 90% of cases, after some fights trying to convince them to think, I end up slowly parting ways with people who end arguments like this. I've never regretted it.
reply
What was the way to figure out which way is north before GPS (barring the use of a compass)?
reply
53 sats \ 1 reply \ @teremok 7h
Hundreds of cues.
My mom always knows north. The church points it out, as does a park, a landmark, buildings, etc.
In her brain she has thousands of reference points. And with every turn, the brain kind of knows.
I have a friend like that too, and when he uses the GPS he is godlike.
If you read the book Don't Sleep, There Are Snakes, the Amazonian Pirahã were very skillful at knowing directions based on the direction of the river.
Also there is a thing called the sun, very useful.
reply
I rarely use GPS; I can read paper maps (and used them in the past) and reached my destinations without knowing where north was. That's very interesting. Did you have to navigate some large open area without roads, where the direction of north is necessary? I never knew where north was, even before GPS...
reply
"ChatGPT said so"
This person is out of bullets and brain cells
reply