AI cynicism of the moment
2025-09-10 20:05:07.570616+02 by Dan Lyke 0 comments
You remember when everyone (including the AI firms) claimed that "hallucinations" would soon be solved, and I got so much shit for arguing that they're a structural property of LLMs?
Now OpenAI releases a paper saying the same thing, and just gets to move on (with all its sycophants).
Really fucking annoys me.
Re OpenAI: Why language models hallucinate
Our new research paper argues that language models hallucinate because standard training and evaluation procedures reward guessing over acknowledging uncertainty.
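The incentive they're describing is easy to see with a toy expected-score calculation (my own illustration, not code from the paper): under plain accuracy scoring, a wrong answer costs no more than "I don't know", so a model that's only 20% sure of an answer still maximizes its score by guessing. Only if the benchmark penalizes confident wrong answers does abstaining ever win.

```python
def expected_score(p_correct, abstain, wrong_penalty=0.0):
    """Expected benchmark score for one question.

    Plain accuracy scoring: +1 for a right answer, 0 for abstaining,
    0 for a wrong answer (wrong_penalty=0). Setting wrong_penalty > 0
    models an eval that docks points for confident wrong answers.
    """
    if abstain:
        return 0.0
    return p_correct * 1.0 + (1.0 - p_correct) * (-wrong_penalty)

p = 0.2  # model is only 20% sure of the answer

# Standard accuracy scoring: guessing strictly beats abstaining.
print(expected_score(p, abstain=False))  # 0.2
print(expected_score(p, abstain=True))   # 0.0

# With a -1 penalty for wrong answers, abstaining wins.
print(expected_score(p, abstain=False, wrong_penalty=1.0))  # -0.6
print(expected_score(p, abstain=True, wrong_penalty=1.0))   # 0.0
```

So a model trained against leaderboards scored the first way learns to always produce *something*, which is exactly the structural argument.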
The office is abuzz this morning with a demo of some YouTuber doing the Google Workspace trick: shove my documents at Gemini, get a "morning zoo" style podcast back out. And I'm thinking about why I'm willing to listen to some of the more ensemble episodes of Switched On Pop and not that. Of course, there are any number of podcasts where a few people talk about a topic I'm deeply interested in that I switch off because the speakers just aren't that insightful, so maybe there's some insight and intention in the good ones that I don't hear in the auto-generated podcasts? Or maybe I'm just biased?
Anyway, I continue to struggle with the "this is crap" and "maybe the market thrives on crap" vibes that fill so much of my world today.