Reclaiming AI as a theoretical tool for cognitive science
2025-05-19 03:24:51.363008+02 by Dan Lyke 0 comments
Work is doing a bit of exploration to incorporate "AI", because in this climate we have to address that, so we've been grafting it on to various features. Our product incorporates web browsing, so I've been building some LLM-enhanced browsing capacity: things like being able to tell a web browser "find the monthly statement download on this website" and whatnot, and learning how to take a process that involves some history and context and figure out how to drive it with a process that is essentially one-shot.
The fact that I need to manage the knowledge, the context, the history, and feed any compression and process from that back into the next query is making me very aware of the ways in which LLMs are not intelligence.
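That bookkeeping loop looks roughly like this sketch. All the names here are hypothetical, and `summarize` is a naive stand-in for whatever compression step you'd actually use (it just keeps the most recent observations that fit a budget), but it shows the shape of the problem: every "one-shot" query has to carry its own condensed history.

```python
def summarize(history, max_chars=200):
    """Naive compression stand-in: keep the most recent entries that fit the budget."""
    kept, total = [], 0
    for entry in reversed(history):
        if total + len(entry) > max_chars:
            break
        kept.append(entry)
        total += len(entry)
    return " | ".join(reversed(kept))

def build_prompt(task, history):
    """Fold compressed history back into the next one-shot query."""
    return f"Context so far: {summarize(history)}\nTask: {task}"

# Each browsing step appends an observation; the next query gets the digest.
history = []
history.append("opened example.com/account")
history.append("logged in; now on dashboard")
prompt = build_prompt("Find the monthly statement download", history)
```

The LLM itself never remembers any of this; the caller does all the remembering, which is rather the point.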
So it's good to read PsyArXiv: Reclaiming AI as a theoretical tool for cognitive science, by Iris van Rooij, Olivia Guest, Federico G. Adolfi, Ronald de Haan, Antonina Kolokolova, and Patricia Rich:
... as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it. The situation could be remediated by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science.
Which I got to by way of Iris van Rooij's BlueSky post. In the ensuing thread you can see all of the reply guys claiming they've read the paper when they clearly haven't, and it's also where I got to Dr. Sabrina Mittermeier @smittermeier.bsky.social, who summarized it as:
TL;DR: AI is so much dumber than you think, aka it is not actually „intelligent“ at all, it can‘t remotely do what most people seem to think it already can, it‘s just good at faking human „thinking“. There is no ghost in the machine. Please stop falling for the grift.
The difficulty, of course, is that there are some things that these generative techniques can do well, and can probably even do ethically (I'm thinking about things like texture fill, and a good portion of embedding search and manipulation can respect the source), and finding those things amongst the noise and glitz is tough.