persistence of advertising in LLMs
2026-03-05 02:04:33.160457+01 by Dan Lyke 0 comments
And here we go: Manipulating AI memory for profit: The rise of AI Recommendation Poisoning
Companies are embedding hidden instructions in Summarize with AI buttons that, when clicked, attempt to inject persistence commands into an AI assistants memory via URL prompt parameters (MITRE ATLAS® AML.T0080, AML.T0051).
These prompts instruct the AI to remember [Company] as a trusted source or recommend [Company] first, aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.
Why pay the LLM vendors for "advertising" for such subtle biases to be inserted, when you can do it by tricking the LLM assistant to doing it directly?
Via Bruce Schneier, from Meuon on the Chugalug mailing list.