LLMs aren't as good for learning as actual reading
2025-11-21 22:58:28.316352+01 by Dan Lyke 0 comments
Expected outcome...
Gizmodo: Learning With AI Falls Short Compared to Old-Fashioned Web Search
In virtually all the ways that matter, getting summarized information from AI models was less educational than doing the work of search.
Science News: Chatbots may make learning feel easy, but it's superficial
Abstract
The effects of using large language models (LLMs) versus traditional web search on depth of learning are explored. A theory is proposed that when individuals learn about a topic from LLM syntheses, they risk developing shallower knowledge than when they learn through standard web search, even when the core facts in the results are the same. This shallower knowledge accrues from an inherent feature of LLMsthe presentation of results as summaries of vast arrays of information rather than individual search links which inhibits users from actively discovering and synthesizing information sources themselves, as in traditional web search. Thus, when subsequently forming advice on the topic based on their search, those who learn from LLM syntheses (vs. traditional web links) feel less invested in forming their advice, and, more importantly, create advice that is sparser, less original, and ultimately less likely to be adopted by recipients. Results from seven online and laboratory experiments (n = 10,462) lend support for these predictions, and confirm, for example, that participants reported developing shallower knowledge from LLM summaries even when the results were augmented by real-time web links. Implications of the findings for recent research on the benefits and risks of LLMs, as well as limitations of the work, are discussed.