Anti AI rant
2026-04-10 16:52:06.504169+02 by Dan Lyke 0 comments
I wrote a kid's homework for them over on Reddit
My complaints center mostly around LLMs, with a slight diversion into generative AI for music and images:
My first complaint is just the quality of the output. I keep having... you know, the kinds of friends who DM you random whackadoodle Substack articles, only now they're DMing acres and acres of LLM-generated slop and saying "this is so insightful," and it isn't. It's mediocre writing that often doesn't actually make sense. Really, when you use an LLM to generate prose, it's producing the metaphorical equivalent of seven-fingered humans; you're just not smart enough to see it.
The second complaint is the outsourcing of thinking. I mean, sure, you can make the argument that these things are analogous to calculators and you don't actually need to do arithmetic, but a lot of what I'm seeing is that people have stopped critically reading the output altogether. Or, if they're coding, they're losing the mental model of the code they're writing: turning out stuff that appears to work, sure, but quickly dropping into delusions about what the LLM can and can't know, with no mental model for the code that's actually being generated.
Which, you know, is fine if you don't actually care how things work, but understanding how things work is how we figure out new and novel and interesting ways to use technologies, and that's not coming out of LLMs.
The third is how that ties into the anthropomorphization of these things. The literature refers to this as "epistemia", but I see a lot of people thinking that the LLM is thinking, and because of the "slot machine" payoff nature of these things, it's right often enough to be really compelling. But then they use it for something where they get a grievously wrong answer, and the crater is pretty big. And because of well-known issues of attention and operator fatigue, there's really no good way to outsource to humans the kind of sustained attention that's necessary to get good output from these things. Use of them will bite you.
(Cue all of the cocky kids saying "skill issue". Dude, if that skill issue could be solved, C would be a safe programming language. Fuck all the way off with that argument.)
Then we get into the ethics of how these things are trained.
The theft of content. I don't even get that cranky about the huge percentage of traffic hitting my web servers from AI vendors, even though it's making it harder to have personal sites. But the use of pirated materials, and the remixing of intellectual property in ways that individual humans would never get away with, feels like a different set of rules. Anthropic and OpenAI pirated how many books? And they're getting a slap on the wrist, and only after huge efforts.
I'm old enough to remember when the record industry went after Napster users. If there were justice applied equally... well...
The power use, from local pollution to climate change to just electricity prices. If there were some sort of good coming out of it, sure, but, as pointed out up-thread, the LLMs are overhyped stupidity (every claim of success from these things has been a lie stemming from overtraining on test data or randomness), and the images are just stupid. Sure, they mostly get the right number of fingers now, but we're gonna burn down the planet for those aesthetics. Eeewww.