
Abstract

The effects of using large language models (LLMs) versus traditional web search on depth of learning are explored. A theory is proposed that when individuals learn about a topic from LLM syntheses, they risk developing shallower knowledge than when they learn through standard web search, even when the core facts in the results are the same. This shallower knowledge stems from an inherent feature of LLMs: results are presented as summaries of vast arrays of information rather than as individual search links, which inhibits users from actively discovering and synthesizing information sources themselves, as they would in traditional web search. Thus, when subsequently forming advice on the topic based on their search, those who learn from LLM syntheses (vs. traditional web links) feel less invested in forming their advice and, more importantly, create advice that is sparser, less original, and ultimately less likely to be adopted by recipients. Results from seven online and laboratory experiments (n = 10,462) support these predictions, showing, for example, that participants reported developing shallower knowledge from LLM summaries even when those summaries were augmented by real-time web links. Implications of the findings for recent research on the benefits and risks of LLMs, as well as limitations of the work, are discussed.

Significance Statement

Might the ease afforded by large language model (LLM) syntheses come at the cost of learning compared with traditional web search? A theory is proposed that because LLM summaries lessen the need to discover and synthesize information from original sources, steps essential for deep learning, users may develop shallower knowledge than they would from web links. When they subsequently form advice on the topic, this manifests in advice that is sparser, less original, and less likely to be adopted by recipients. Results from seven experiments support these predictions, showing, for example, that these differences arise even when LLM summaries are augmented by real-time web links. Hence, learning from LLM syntheses (vs. web links) can, at times, limit the development of deeper, more original knowledge.

...read more at academic.oup.com

yes, the brain programming via external computer chips & screens (blue light, imperceptible flicker, cat videos, etc.) is intricately related to electromagnetic signaling to & from the immortal hydra nanotech biology;

for most of the population, this mind control results in dumbing down, while a select group of people become powerful hackers & attention manipulators;

https://electrostasis.substack.com/p/aibcps-199-chinese-programmer-discloses

fascinating connection to crypto as well; bitcoin fixes this?


[image: the immortal hydra]
