CLTR finds a 5x increase in scheming-related AI incidents
2026-03-30 17:44:37.336382+02 by Dan Lyke 0 comments
On the one hand, I wanna link to The Guardian: Number of AI chatbots ignoring human instructions increasing, study says; on the other hand, the report from the UK Centre for Long-Term Resilience seems like the sort of thing meant to freak out policy-makers rather than actually be useful.
The trend is striking. The number of credible scheming-related incidents increased 4.9x over the collection period, a statistically significant increase that far outpaced the 1.7x growth in overall online discussion of scheming, and the 1.3x growth in general negative discussion about AI. This surge coincided with the release of a wave of more capable, more agentic AI models and frameworks from major developers.
Like, uh, you wanna normalize that by anything? Additional use? The advent of more long-running systems like OpenClaw?
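Because here's the thing about raw multipliers: divide by any plausible growth in usage and they deflate fast. A quick sketch, using the report's 4.9x figure but with made-up, purely illustrative usage-growth numbers (the report doesn't give any):

```python
# Back-of-the-envelope: a raw incident multiplier shrinks once you
# divide by growth in usage. The 4.9x is from the CLTR report; the
# usage-growth multipliers below are hypothetical, for illustration.

def normalized_growth(incident_multiplier: float, usage_multiplier: float) -> float:
    """Growth in incidents per unit of use, not raw incident count."""
    return incident_multiplier / usage_multiplier

raw = 4.9  # CLTR's reported increase in scheming-related incidents

for usage_growth in (1.0, 2.0, 3.0, 5.0):
    rate = normalized_growth(raw, usage_growth)
    print(f"usage up {usage_growth:.0f}x -> per-use incidents up {rate:.2f}x")
```

If usage itself tripled over the collection period, the per-use incident rate went up about 1.6x, which is a lot less scary a headline than 4.9x.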
It's great to say "hey, these things are dangerous, and even technical users are tripping over their shoelaces when using these systems ties them together", and I'm all for policy which engages more discussion about these things, but I also think the way this discussion is unfolding exposes a lot about how policy gets made by emotional reaction rather than by any sort of real model.