The wall confronting large language models
2025-09-05 19:54:38.802422+02 by Dan Lyke 0 comments
"The wall confronting large language models", by Peter V. Coveney and Sauro Succi:
We argue that the very mechanism which fuels much of the learning power of LLMs, namely the ability to generate non-Gaussian output distributions from Gaussian input ones, might well be at the roots of their propensity to produce error pileup, ensuing information catastrophes and degenerative AI behaviour.
Via Elf Sternberg, who summarized this as:
That LLMs produce non-Gaussian output distributions from Gaussian inputs is the very mechanism that prevents LLMs from ever meeting the standards required of scientific inquiry.
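A toy illustration of the mechanism being discussed (this is my sketch, not the paper's example): push Gaussian samples through a nonlinear map, here an exponential standing in for the stacked nonlinearities inside a network, and the output distribution becomes heavy-tailed and non-Gaussian, which excess kurtosis makes visible.

```python
# Illustrative sketch: a nonlinearity turns Gaussian input into a
# non-Gaussian (heavy-tailed) output distribution.
import math
import random

random.seed(0)

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3; roughly 0 for a Gaussian."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3

gaussian_in = [random.gauss(0.0, 1.0) for _ in range(100_000)]
# exp() of a Gaussian is log-normal: a strongly right-skewed, heavy tail
nonlinear_out = [math.exp(x) for x in gaussian_in]

print(f"input excess kurtosis:  {excess_kurtosis(gaussian_in):.2f}")   # near zero
print(f"output excess kurtosis: {excess_kurtosis(nonlinear_out):.2f}") # large and positive
```

The input's excess kurtosis hovers near zero, as a Gaussian's should, while the transformed output's is far above it; in the paper's framing, it is this departure from Gaussianity that both powers learning and drives error pileup.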