
This link was posted by claytonwramsey 2 hours ago on HN. It received 182 points and 112 comments.
"The model produces better work. Some of my peers believe that large language models produce strictly better writing than they could produce on their own. Anecdotally, this phenomenon seems more common among English-as-a-second-language speakers. "
I deleted my ChatGPT account today. On Wednesday, I was fired because my manager thought my spoken English was not good enough to communicate with clients (Brazilian Portuguese is my native tongue). And I noticed today that my halt in learning English and French started at the same time as my ChatGPT use. Three years, feeling a little dumber every day, and only now do I understand why.
"I believe that the main reason a human should write is to communicate original thoughts. To be clear, I don’t believe that these thoughts need to be special or academic. Your vacation, your dog, and your favorite color are all fair game. However, these thoughts should be yours: there’s no point in wasting ink to communicate someone else’s thoughts."
I couldn't agree more. Being able to articulate a prompt is a sign that you understand enough to start solving the problem yourself, but skipping the hard work for an instant generated response shadows your cognitive potential. I feel much smarter just by "raw dogging" this post, really trying to write in my own words. Not everyone knows two extra languages well enough to communicate in both, even with small mistakes. I should be proud of that, and using AI was making me feel lazy and a little ashamed. I'm rediscovering the pleasure of trying to solve things and express myself using my own spaghetti of pink, non-artificial worms.
43 sats \ 0 replies \ @k00b 4 May
"I now circle back to my main point: I have never seen any form of creative generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one's own experience - if there's no experience to share, why bother? If it's not worth writing, it's not worth reading."
This sums up how I feel about the majority of model outputs. I generally read because I want to see someone's thinking, and LLMs are still pretty bad at exposing their thinking. I read because I'm trying to learn how to think about the thing being written about, not because there's a shortage of things to read and I need extra words wrapped around facts.
That isn't to say the output is valueless. It seems very useful for augmenting one's thinking and making connections one couldn't make alone. But the majority of the content lacks most of what I value in content.