Ageless Linux
Software for humans of indeterminate age. We don't know how old you are. We
don't want to know. We are legally required to ask. We won't.
Including The Ageless Device
A physical computing device designed to satisfy every element of the California
Digital Age Assurance Act's regulatory scope while deliberately refusing to comply with its
requirements. The device costs less than lunch and will be handed to children.
This feels kinda fascinating: Facebook ad for "Granola.ai" has a testimonial from Deedy, partner at Menlo Ventures: "Granola is one of the best made "AI" apps that I've used this year."
Is AI as a phrase becoming poisoned enough that it's getting quoted?
https://www.facebook.com/perma...E7rHBHqBVb5mQl&id=61579723227585
Stanford Law School: Designed to Cross: Why Nippon Life v. OpenAI
Is a Product Liability Case
Graciela Dela Torre settled a long-term disability claim with prejudice in
January 2024. Feeling she had been misled by her attorney, she uploaded his correspondence
to ChatGPT. The chatbot validated her distrust. She fired her lawyer, attempted to reopen
the settled case, and filed dozens of motions that courts found served no legitimate legal
purpose. In March 2026, Nippon Life Insurance Company of America sued OpenAI for $10.3
million.
Mark Dominus linked to the actual complaint.
Unfortunately, I don't think $10.3M is nearly enough, unless it opens up the floodgates
against OpenAI's malfeasance.
fenchelmit
@fen@zoner.work
heard "be the elephant you want to see in the room" earlier and gosh
if that hasn't stuck with me
maxine 🇵🇸
@maxine@hachyderm.io
LLM users respect a chatbot more than potential contributors is the worst part
of all this. Everyone was capable of writing basic docs all along. They just didnt want to
for a fellow human.
I dont know what exactly is it when you treat people as things and things as
people, but it sure is fucking gross.
Oh, hey, it turns out that removing all skill and turning your pipeline over to commodity
generation that anyone who wants that kind of slop can do themselves might have
consequences: Futurism": BuzzFeed Nearing Bankruptcy After Disastrous Turn Toward AI.
Now, three years after its AI pivot, the writing is on the wall. The company
reported a net loss of $57.3 million in 2025 in an earnings report
released on Thursday. In an official
statement, the company glumly hinted at the possibility of going under sooner rather than later, writing that there is
substantial doubt about the Companys ability to continue as a going concern.
Via and via.
Add this to your morning's comics: The Joy Of Tech: Support
Group for AI Chatbots. Fediverse link.
Ars Technica: Supply-chain attack using invisible code
hits GitHub and other repositories. As David Gerard points out
it's kind of a rehash of the old (March 2024) using Unicode tags for prompt injection.
This one almost needs its own post. AI changes how you think: Cornell Chronicle: AI assistants can sway writers
attitudes, even when theyre watching for bias
Previous misinformation research has shown that warning people before theyre
exposed to misinformation, or debriefing them afterward, can provide immunity against
believing it, said Sterling Williams-Ceci 21, a doctoral candidate in information science.
So we were surprised because neither of those interventions actually reduced the extent to
which peoples attitudes shifted toward the AIs bias in this context.
Science Advances: Biased AI
writing assistants shift users attitudes on societal issues
In two large-scale preregistered experiments (N = 2582), we exposed participants
writing about important societal issues to an AI writing assistant that provided biased
autocomplete suggestions. When using the AI assistant, the attitudes participants expressed
in a posttask survey converged toward the AIs position. However, a majority of participants
were unaware of the AI suggestions bias and their influence. Further, the influence of the
AI writing assistant was stronger than the influence of similar suggestions presented as
static text, showing that the influence is not fully explained by these suggestions,
increasing accessibility of the biased information. Last, warning participants about
assistants bias before or after exposure does not mitigate the attitude-shift effect.
Via
It is fascinating watching singers try to transpose, and shift mode, instead.
Still an Emacs user. Beware the IDEs.
I would happily trade some Midwest weather here in March.
(A screen capture of the National Weather Service forecast for Petaluma California, showing 85 (Fahrenheit ) today, 85 tomorrow, 88 Tuesday, and 87 Wednesday and Thursday.)
Just went to Katherine Rhinehart's talk on auto oriented Petaluma development, and I appreciate the historical interest, but I have trouble seeing those buildings as anything but a monument to lead pollution and the smell of unburned hydrocarbons.