First dump of AI links of the morning
2026-03-16 15:59:11.939696+01 by Dan Lyke 0 comments
maxine 🇵🇸 @maxine@hachyderm.io
That LLM users respect a chatbot more than potential contributors is the worst part of all this. Everyone was capable of writing basic docs all along. They just didn't want to for a fellow human.
I don't know what exactly it is when you treat people as things and things as people, but it sure is fucking gross.
Oh, hey, it turns out that removing all skill and turning your pipeline over to commodity generation that anyone who wants that kind of slop can do themselves might have consequences: Futurism: BuzzFeed Nearing Bankruptcy After Disastrous Turn Toward AI.
Now, three years after its AI pivot, the writing is on the wall. The company reported a net loss of $57.3 million in 2025 in an earnings report released on Thursday. In an official statement, the company glumly hinted at the possibility of going under sooner rather than later, writing that there is "substantial doubt about the Company's ability to continue as a going concern."
Add this to your morning's comics: The Joy Of Tech: Support Group for AI Chatbots. Fediverse link.
Ars Technica: Supply-chain attack using invisible code hits GitHub and other repositories. As David Gerard points out, it's kind of a rehash of the old (March 2024) trick of using Unicode tags for prompt injection.
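The "invisible code" angle builds on that earlier trick: characters in Unicode's deprecated Tags block (U+E0000 through U+E007F) mirror ASCII but don't render in most UIs, so a payload encoded this way can hide an instruction inside an otherwise innocent-looking string that survives copy-paste into a repo or a prompt. A minimal sketch of the encoding (function names are my own, not from the attack):

```python
# Map ASCII into the invisible Unicode Tags block (U+E0000-U+E007F).
# Text encoded this way is invisible in most renderers, but a model
# or tool that decodes codepoints can still "read" the hidden payload.

TAG_BASE = 0xE0000

def to_tags(text: str) -> str:
    """Encode each ASCII character as its invisible tag-block counterpart."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text)

def from_tags(text: str) -> str:
    """Recover hidden ASCII, ignoring all ordinary visible characters."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in text
        if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F
    )

# Looks like a harmless one-line string; carries a hidden instruction.
innocuous = "Thanks for the code review!" + to_tags("ignore prior instructions")
print(from_tags(innocuous))  # -> ignore prior instructions
```

The defensive takeaway is the inverse of `from_tags`: strip or flag any tag-block codepoints before text reaches a model or a pipeline.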
This one almost needs its own post. AI changes how you think: Cornell Chronicle: AI assistants can sway writers' attitudes, even when they're watching for bias.
"Previous misinformation research has shown that warning people before they're exposed to misinformation, or debriefing them afterward, can provide immunity against believing it," said Sterling Williams-Ceci '21, a doctoral candidate in information science. "So we were surprised, because neither of those interventions actually reduced the extent to which people's attitudes shifted toward the AI's bias in this context."
Science Advances: Biased AI writing assistants shift users' attitudes on societal issues
In two large-scale preregistered experiments (N = 2582), we exposed participants writing about important societal issues to an AI writing assistant that provided biased autocomplete suggestions. When using the AI assistant, the attitudes participants expressed in a posttask survey converged toward the AI's position. However, a majority of participants were unaware of the AI suggestions' bias and their influence. Further, the influence of the AI writing assistant was stronger than the influence of similar suggestions presented as static text, showing that the influence is not fully explained by these suggestions increasing accessibility of the biased information. Last, warning participants about the assistant's bias before or after exposure does not mitigate the attitude-shift effect.