Jerusalem syndrome for silicon
2025-06-13 17:33:04.281524+02 by Dan Lyke 0 comments
With lots of conversations recently about LLMs, and how people don't seem to have the tools to process what they're seeing, I keep coming back to Baldur Bjarnason's note that avoiding generative models is the rational and responsible thing to do (a shorter follow-up to Trusting your own judgement on ‘AI’ is a huge risk), because I hear a lot of conversations with smart people who, if pressed, would say "these things are a clever hack", but then go on to use them as though they're omniscient entities.
Referencing a Reddit conversation about people losing perspective with ChatGPT, Elf Sternberg highlighted:
"Being able to say 'That sounds nuts' without having a point by point rebuttal is a critical talent for surviving in this world." AIs are not meant to survive in this world.
and as the poster of those conversations, Linnea Sterte @decassette.bsky.social, noted:
some of these are prob mostly lies ppl made up but the 'no I'M the one who made the ai awaken & become sentient' via eliza effect or w e is fascinating as a weird cyberpunk narrative
Even the NYT is starting to notice: Justin Hendrix @justinhendrix.bsky.social
Here is a gift link to @kashhill.bsky.social's must-read piece on the dangers of OpenAI's sycophantic LLMs and the parasocial relationships people are creating with them.
New York Times: They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling. (Gift link)
“What does a human slowly going insane look like to a corporation?” Mr. Yudkowsky asked in an interview. “It looks like an additional monthly user.”