Flutterby™! (short)

Friday March 6th, 2026

406 error message Dan Lyke / comment 0

Could swear I linked to this, but I can't find it, so... https://406.fail

Network Working Group                                BOFH Task Force
Request for Comments: 406i                             February 2026
Category: Imaginary Standard
Obsoletes: Basic Patience

     
         The Rejection of Artificially Generated Slop (RAGS)
                   [ERROR 406i: AI_SLOP_DETECTED]

Anyone know if the MacBook Neo runs Dan Lyke / comment 1

Anyone know if the MacBook Neo runs MacOS apps, or if it's a glorified iPad? A friend is excited about it, but only if it'll run https://squaredesk.net , and I don't have the tuits to try to make an iOS port right now...

Sam Altman eyes Dan Lyke / comment 0

Killa Koala @dshan@mastodon.au

SAM ALTMAN EYES

(With apologies to Jackie De Shannon, Donna Weiss and Kim Carnes)

a riff on Bette Davis Eyes, and I'm gonna throw a "Betty" in here so that I can more easily find it later.

Ahhh Dan Lyke / comment 0

Ahhh, Facebook Marketplace listings: "Brass ... is the gold standard..."

OMG Dan Lyke / comment 0

OMG. I'm digging through various documentation for configuring AI "Agents", and Microsoft Copilot actually uses configured trigger phrases, apparently with string matching, to figure out when to trigger a particular configuration. Like "will it rain", "today's forecast", "get weather", etc.

Google fights climate change Dan Lyke / comment 0

Google pledges roughly three hours of its annual profit to fight climate change

Alphabet, Google’s parent company, reported $132 billion in net income in 2025. Google's five-year, $50 million pledge works out to about three hours of that. The company is also set to spend billions building massive data centers for AI that it claims are more resource conscious than others. So far, Google’s AI infrastructure buildout drove an 11 percent rise in the company's total emissions last year.

Thursday March 5th, 2026

Some more AI talk Dan Lyke / comment 0

As I'm trying to scope out the current state of AI agents, have some LLM links and opinion worth reading from Sean Connor:

GitHub issue title compromises npm package via triage bot Dan Lyke / comment 0

Wheee: A GitHub Issue Title Compromised 4,000 Developer Machines

For the next eight hours, every developer who installed or updated Cline got OpenClaw - a separate AI agent with full system access - installed globally on their machine without consent. Approximately 4,000 downloads occurred before the package was pulled1.

The interesting part is not the payload. It is how the attacker got the npm token in the first place: by injecting a prompt into a GitHub issue title, which an AI triage bot read, interpreted as an instruction, and executed.

Memory errors are more common than you think Dan Lyke / comment 0

Thread from Gabriele Svelto @gabrielesvelto@mas.to about using Firefox crash reports to try to quantify RAM failures, and coming to the conclusion that:

In other words up to 10% of all the crashes Firefox users see are not software bugs, they're caused by hardware defects! If I subtract crashes that are caused by resource exhaustion (such as out-of-memory crashes) this number goes up to around 15%. This is a bit skewed because users with flaky hardware will crash more often than users with functioning machines, but even then this dwarfs all the previous estimates I saw regarding this problem.

persistence of advertising in LLMs Dan Lyke / comment 0

And here we go: Manipulating AI memory for profit: The rise of AI Recommendation Poisoning

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters (MITRE ATLAS® AML.T0080, AML.T0051).

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

Why pay the LLM vendors for "advertising" for such subtle biases to be inserted, when you can do it by tricking the LLM assistant to doing it directly?

Via Bruce Schneier, from Meuon on the Chugalug mailing list.

Out of Office Experience Dan Lyke / comment 0

Jeff Forcier @bitprophet@social.coop

OH: "You want me to go back to the office? The same thing that killed Ayatollah Khamenei?"

Office dog is awesome and cuddly and I Dan Lyke / comment 0

Office dog is awesome and cuddly and I appreciate that she comes to me for scritches and when she thinks it's time for lunch, but that somewhere between 3:30 and 4:30 afternoon fart is... somethin' else.


Flutterby&tm;! is a trademark claimed by
Dan Lyke
for the web publications at www.flutterby.com and www.flutterby.net. Last modified: Thu Mar 15 12:48:17 PST 2001