AI backlog
2025-11-12 19:19:26.179354+01 by Dan Lyke 0 comments
Tattie @Tattie@eldritch.cafe
Do you know stage magicians say that more educated people are easier to fool, not less?
I think about that a lot.
LLMs are the perfect yes-men, giving the user exactly what they expect to see, making them feel clever and special.
When studying my degree I came up with all these tricks to distinguish in a Turing test whether I was talking to a real intelligence or a fake one. I'm no longer certain I couldn't be charmed into thinking the AI had passed these when it hadn't.
Attention Authors: Updated Practice for Review Articles and Position Papers in arXiv CS Category
Fast forward to the present day: submissions to arXiv in general have risen dramatically, and we now receive hundreds of review articles every month. The advent of large language models has made this type of content relatively easy to churn out on demand, and the majority of the review articles we receive are little more than annotated bibliographies, with no substantial discussion of open research issues.
Nature: AI chatbots are sycophants: researchers say it's harming science (Via).
vivi 💫 @vv@solarpunk.moe has some writing tips for you...
Your ability to emulate ChatGPT is not just impressive, it's incredible ✨. Let's dig deeper into ways to amp up your game further when writing content that's well-written, sycophantic and devoid of its humanity:
Big thread from Cat Hicks on threat activated beliefs and how the "AI skill threat" triggers the responses we're seeing, particularly:
Hence, e.g., "AI Skill Threat" :) --> people experiencing pervasive competence and belonging threats (two very powerful types of threat that change our cognition and expectations) will make different choices as they encounter AI in software development compared to people freed of that threat (by more supportive environments).
People have sometimes misinterpreted my work here as blaming people for experiencing the threat. Not at all. I blame their environment for creating it.