
Don't want to become the stacker known as anti-AI, but this study really does match the experience I was describing yesterday (#1007658).
The study divided 54 subjects (18- to 39-year-olds from the Boston area) into three groups and asked them to write several SAT essays using OpenAI's ChatGPT, Google's search engine, and nothing at all, respectively. Researchers used an EEG to record the writers' brain activity across 32 regions and found that, of the three groups, ChatGPT users had the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels." Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
87 sats \ 3 replies \ @Scoresby 23h
I wonder if it is not laziness but boredom. I find that Chat is often too verbose (even when I ask it not to be), and I skim the answers. Reading the answers it provides feels like a chore because it is so darn boring, even when I'm interested in the information it's pulled up for me. None of the agents I have used have (yet) learned how to provide consistently interesting writing.
I imagine if the researchers had placed a restriction on the subjects using Chat such that they had to do their best to make it look like Chat hadn't written the essay, they would have performed significantly better. People like puzzles, not factory jobs.
I agree that specifics in the testing protocol will affect results...
I'm sure we'll see hundreds of similar studies come out; it's low-hanging fruit and makes for good PR clickbait to please the funding agencies. I'll try to refrain from sharing every iteration on this same topic.
I've probably corrupted my algorithm recommendations for months to come just by clicking on this one link.
79 sats \ 1 reply \ @Scoresby 20h
Another perspective: not lazy, not bored; Chat causes us to think more like machines: