AI bullshit is an inevitable part of the process
2025-09-22 19:13:05.517693+02 by Dan Lyke 0 comments
A conversation in the office this morning about the succinctness and clarity of an LLM response to a question, versus the search results with their tons of web pages answering roughly the same question in roughly the same words, made me realize that much of what people like about LLM answers is the lack of advertising and popups and other obscuring things that require interaction before you get to the content.
And in a world where provenance has been flooded by LinkBaitRUs(dot)com republishings, maybe things like knowing where our information is coming from, provenance and repeatability and all that, are becoming less important to people?
On the other hand, it's definitely spewing bullshit: Did a ‘KPop Demon Hunters’ Songwriter Really Use ChatGPT to Write ‘Soda Pop’?
Here’s where things get complicated. The alleged use of AI to help write “Soda Pop” was first reported in the English-language version of Joongang Daily — but the original Korean text of the article makes no mention of ChatGPT being used specifically during the production of KPop Demon Hunters’ music.
Which Erkhyan @erkhyan@yiff.life describes as:
Yay, more AI-generated misinformation!
And, yeah, but it's also people using tools they don't understand and munging meaning as they repost and rephrase. If we attribute all of this to "AI", we risk removing the agency from the humans the same way we have with cars: "Cyclist fatally struck by SUV in Sonoma County" indeed (if we're not gonna mention the driver, can we at least say "with an SUV"?).
A lot of links and commentary over Futurism: OpenAI Realizes It Made a Terrible Mistake (Jason Gorman @jasongorman@mastodon.cloud) and ComputerWorld: OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
Via Natasha Jay 🇪🇺 @Natasha_Jay@tech.lgbt, Baldur Bjarnason @baldur@toot.cafe, and Charlie Stross @cstross@wandering.shop, who also observed:
The most hilarious and horrible side-effect of LLMs is that we now have a definitive answer to the question implied by Searle's Chinese Room thought-experiment.
Anthropic and OpenAI have built the Chinese Room. And while it's clear now that there's no ghost in the machine, lots of people think they're having a real conversation ...
To which Jack William Bell @jackwilliambell@rustedneuron.com responded:
We need a term that combines 'parasocial' with 'pareidolia'. IOW, we have something which can pass the Turing Test well enough to lead (some) people into treating it as human and applying/ascribing human social interactions to it.
But there's no there there.
Bonus: Ars Technica: AI medical tools found to downplay symptoms of women, ethnic minorities (republishing an FT article, and they don't have the depth of citation that I'd want in an article like this). Yeah, these models encode the bias in the language used to build them. Go figure. (Peter Murray @dltj@code4lib.social).