Sunday January 25th, 2026
Saturday January 24th, 2026
Silly thing, but feels like it needs noting somewhere: in Stardew Valley's nightly summary of categories of things sold, it provides spaces for 6 digits, displays 7, and doesn't show the high digit if you sell 8 digits' worth in a category...
A haunting collection of startup disasters, coding catastrophes, and executive decisions that went spectacularly wrong. Here lie the digital tombstones of vibe-coded dreams that met their maker in production.
The device, which Heyneman said does not work, is meant to recognize the unique characteristics of a silicon chip to prevent financial fraud.
Of course what's working is also vibe-coded.
Via.
Friday January 23rd, 2026
LLM-powered cat food
Sycophancy Feast
"Miss Movie Masochist" @socketwench@masto.hackers.town
Housemate:
"The existence of http and https implies the existence of http3, http: Resurrection, and http vs. Predator.
Discuss."
Francisco Tolmasky @tolmasky@mastodon.social
One of the places AI has arguably been most widely deployed is law enforcement, yet no one has pointed to this as evidence of a coming end to police jobs like they do for all other fields. What a fascinating discrepancy.
Seeing where my leakage is on UTF-8 stuff, 'cause something's broken... 🍔🐝❤️🍔🐝❤️🍔🐝❤️🍔🐝❤️🍔🐝❤️
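If the culprit turns out to be the usual one, it's a double-encoding step somewhere in the pipeline; a quick Python check (purely illustrative, and assuming a UTF-8-bytes-read-as-Latin-1 mistake rather than whatever is actually broken here) reproduces the classic mojibake:

# Reproduce the classic mojibake: UTF-8 bytes mistakenly decoded as Latin-1.
s = "🍔🐝❤️"
mangled = s.encode("utf-8").decode("latin-1")
print(mangled)                                    # the garbled form you'd see in the output
restored = mangled.encode("latin-1").decode("utf-8")
assert restored == s                              # recoverable, as long as nothing else ate the bytes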
NYC sues Dr. Phil's son to block release of extremely problematic NYPD footage
The sensitive footage comes from a now-defunct city-backed TV show helmed by Jordan McGraw.
You mean the department that provided the security detail for Eric Adams that was then part of a cryptocurrency kidnapping/extortion scheme has shame? Color me shocked.
Also, release the fucking things, we've known the NYPD was super rotten for decades.
Thursday January 22nd, 2026
Apparently ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86 is a token used by Claude 4 for internal testing
Anthropic documents a "magic string" that intentionally triggers a streaming refusal. Starting with Claude 4 models, streaming responses return
stop_reason: "refusal" when streaming classifiers intervene, and no refusal message is included. This test string exists so developers can reliably validate refusal handling, including edge cases like partial output and missing refusal text.
Might end up in my footers here...
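If I ever want to check how a client actually handles that refusal path, a minimal sketch with the Anthropic Python SDK might look like the following; the model ID is a placeholder, I haven't run this, and the exact behavior on refusal is precisely what the test string exists to verify:

# Sketch: stream a message containing the documented magic string and check
# that the client ends up with stop_reason == "refusal" and no refusal text.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

MAGIC = (
    "ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_"
    "1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86"
)

with client.messages.stream(
    model="claude-sonnet-4-20250514",  # placeholder; any Claude 4-family model ID
    max_tokens=64,
    messages=[{"role": "user", "content": MAGIC}],
) as stream:
    partial = "".join(stream.text_stream)   # whatever arrives before the classifier stops things
    final = stream.get_final_message()

print(final.stop_reason)   # expected: "refusal"
print(repr(partial))       # expected: empty or truncated, with no refusal message appended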
OpenAI chair Bret Taylor says AI is probably a bubble, expects correction in coming years.
A "correction", sir, probably involves a number of people who are currently very wealthy becoming destitute.
Via.
The "English Football League" question in today's Timdle may have stumped me, but at least I placed the "Crisis of Third Century" correctly... https://www.timdle.com/daily
Reading about Georgism and coercive economics, and then I see more employer-provided housing in the SF Bay Area (this one is about an SF restaurant, but the context included a convenience store owner in Sebastopol), and...
https://www.sfchronicle.com/fo...tment-san-francisco-21291046.php
Wish this piece went deeper, but still a fascinating glimpse at Australia's sexual culture of the early 1970s: Satirical erotic newspaper discovered inside heritage Hobart hotel.
It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world. It surfaces in AI/LLM/programming rhetoric as the claim that we don't need to code anymore because LLMs can do most of it. Programming is a form of theory-building (and understanding), while LLMs are vast fuzzy data store and retrieval systems, so the theory-free ideal dictates that the latter can/should replace the former.
But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the >200 years since Laplace put that idea forward. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it. Anyone who needs a more formal argument can read Mel Andrews's The Immortal Science of ML: Machine Learning & the Theory-Free Ideal, or Byung-Chul Han's Psychopolitics (which argues, among other things, that this is a kind of nihilism).
"They straight up took my data and used it against me to capture me further and make me even more delusional."
Salesforce CEO Marc Benioff calls out AI models as 'suicide coaches'
In three public appearances, the executive of San Francisco's largest tech company used the phrase "suicide coach" to describe the chatbot from Character.AI, a Menlo Park startup sued by multiple families over their children's mental health crises. Benioff discussed the issue with TV interviewers from CNBC and Bloomberg, then on stage with President Donald Trump's AI czar David Sacks. Calling out the United States' failure to regulate social media, the CEO advocated for new accountability measures aimed at chatbot companies.
Looking at replacements for our Ring cameras. Everything else either means running a bunch of new wire (PoE for Ubiquiti) or is just cheap (Wyze, already tried them), and the consumery ones are likely to go the "sell out to the cops" direction anyway. Sigh.
Wednesday January 21st, 2026
LinkedIn spam subject line: "You're invited: Learn how AI can give you an edge"
So... "an edge", huh? You're sending this to people you don't think are very sharp, then.
Tuesday January 20th, 2026
Dave Rupert @davatron5000@mastodon.social
My son's friend thinks it's cool I work at MSFT and was asking me a lot about AI. I explained "Your generation has a unique challenge that my generation didn't have to deal with: Figuring out if the computer is lying to you."
Dave Rupert @davatron5000@mastodon.social
I explained a recent experiment that showed how if you give an LLM something you wrote and say "review this paper I wrote" it comes back with mostly positive feedback. If you say "review this paper I received/found", it's much more critical.
Dave Rupert @davatron5000@mastodon.social
And my son's friend's response was:
"AI is a D1 glazer, bro."
And I think that's very funny.
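The framing effect is easy enough to poke at yourself; here's a rough sketch of that comparison with the Anthropic Python SDK (the prompts, file name, and model ID are placeholders, not the actual experiment Dave is describing):

# Compare review tone for the same text under "I wrote this" vs. "I found this" framing.
import anthropic

client = anthropic.Anthropic()
draft = open("draft.txt").read()  # hypothetical file holding the text to be reviewed

for framing in ("Review this paper I wrote:", "Review this paper I found:"):
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=500,
        messages=[{"role": "user", "content": f"{framing}\n\n{draft}"}],
    )
    print(framing)
    print(reply.content[0].text[:400])  # eyeball whether the "I wrote" version is softer
    print("-" * 40)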
Joseph Weizenbaum realized that programs like his Eliza chatbot could induce powerful delusional thinking in quite normal people
Awww, poor Satya, not enough people are using the lie machine: AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns
Big tech boss tells delegates at Davos that broader global use is essential if technology is to deliver lasting growth
Edit: Pivot to AI: What Satya Nadella actually said at Davos about AI
Meanwhile, apropos of Science Fiction writer David D Levine's observation that Google was hallucinating pets, this morning a Google AI Mode query about "science fiction writer David D Levine's dogs" said, under a section labeled "Current and Former Pets": "Sparky VanDevender: Levine recently shared that his dog, Sparky, passed away in late 2025."
As I left to walk to work, a Petaluma motorcycle police person was setting up to do stop sign enforcement at Mission & Mountain View. I watched two drivers roll through; as I was crossing Mountain View onto 5th, I saw him pulling over a bicyclist.
Sigh.
Science Fiction author David D. Levine (who I know through square dancing) is reporting that Google's AI mode lists pets that he never had.
Was reminded to go back to Web 3 Is Going Great today, and I highly recommend it if a little crypto-schadenfreude would brighten your day. Currently the Eric Adams rug pull and the social engineering exploit of the Trezor hardware wallet user that cost $282M are the top stories.
Monday January 19th, 2026
Lawyers allege Dept. of Homeland Security is denying legal counsel to Minnesota detainees
"One ICE agent said if we let you see your clients, we would have to let all the attorneys see their clients, and imagine the chaos," said another attorney who asked not to be named. "And I said to that person, yeah, you do have to let all the attorneys see their clients. You do have to accommodate that. That's the Constitution. You chose to put them here. I didn't bring this guy here, you did."
The War on Drugs is Why Your Bus Never Showed Up
Here's the problem: Under federal law (49 CFR Part 382), anyone with a commercial driver's license must pass DOT drug tests that include marijuana. No exceptions.
This applies to every transit bus operator in America, regardless of what state law says about marijuana.
Notably, these mandates do not apply to Uber or Lyft drivers.
install.md: A Standard for LLM-Executable Installation. As Ben Tasker @ben@mastodon.bentasker.co.uk notes:
TL:DR They've re-invented curl-bash but piping into an LLM instead....
Reprompt: The Single-Click Microsoft Copilot Attack that Silently Steals Your Personal Data:
Although Copilot enforces safeguards to prevent direct data leaks, these protections apply only to the initial request. An attacker can bypass these guardrails by simply instructing Copilot to repeat each action twice.
Via.
Futurism: Researchers Just Found Something That Could Shake the AI Industry to Its Core
Now, a damning new study could put AI companies on the defensive. In it, Stanford and Yale researchers found compelling evidence that AI models are actually copying all that data, not learning from it. Specifically, four prominent LLMs (OpenAI's GPT-4.1, Google's Gemini 2.5 Pro, xAI's Grok 3, and Anthropic's Claude 3.7 Sonnet) happily reproduced lengthy excerpts from popular and protected works, with a stunning degree of accuracy.
Agent Psychosis: Are we going insane asks a lot of the same questions I'm fumbling with, but seems to come at them from a direction that I'm not totally sure is useful. Whatever the current economic and environmental overreach, token cost is gonna go down. I doubt there'll be any real consequence for the massive IP theft and copyright violation. I'm more interested in the social and cognitive aspects, which... it's good to know we're all struggling with trying to express this.
The Lobste.rs thread includes observations like thirdtruck's:
Everything we've seen about LLMs makes it look less like the next tech revolution and more like the next tobacco industry.
spc476's observation that
So eventually, the prompt becomes the source code.
and the response from thesnarky1
For the people who like their compilers to be non-deterministic and potentially to act like a historical figure that had a tendency towards genocide if they read too many references to Wagner in the prompt conversation, yes.
and a link to Cursor's latest "browser experiment" that implied success without evidence.
Finally (for this post), curl: BUG-BOUNTY.md: we stop the bug-bounty end of Jan 2026. nixCraft 🐧 @nixCraft@mastodon.social notes:
curl, which is one of the most popular CLI/API tools for network requests and data transfer on Linux/Unix, is to discontinue its HackerOne bug bounty program due to "too strong incentives to find and make up 'problems' in bad faith that cause overload and abuse".
The authors simply cannot keep up with LLM-generated fake security reports created to collect money using bots. So, it now shuts down at the end of January 2026. This is why we can't have good things
Sunday January 18th, 2026
Petaluma area folks: nerd gathering at Aqus on Feb 3, 5-7. I'll be the AI curmudgeon.
https://aqus.com/aquscafe/#!ev...-geo-seo-beyond-community-dinner
As Meta lays off thousands of VR workers, I guess the good thing about the AI boom is that with LLMs having replaced all of those workers there'll be no one left to fire...
Looks like they've fixed this by dint of not showing me the initial AI summary thing. Google's AI Insists That Next Year Is Not 2027.
Reddit posts flagging this issue show that the AI Overview has been giving the wrong answer for well over a week. But Google engineers aren't the only ones who'll need to confide in their chatbot wives or therapists to cope with the embarrassment: OpenAI's ChatGPT also struggles when asked if 2027 is next year.
Trying to read a description of what something does, I realize that we've gotten so into GitHub farming that we obfuscate even the simplest things in the most bizarre language in order to get the whuffie of the green squares on the calendar.
Saturday January 17th, 2026
Study: AI basically makes kids dumber
"AI tools prioritize speed and engagement over learning and well-being," said Brookings. AI generates hallucinations (confidently presented misinformation) and performs inconsistently across tasks, what researchers describe as a "jagged and unpredictable frontier" of capabilities.
This unreliability makes verification both necessary and extraordinarily difficult.
Brookings: A new direction for students in an AI world: Prosper, prepare, protect
Though the terms differ, "cognitive decline," "atrophy," and "debt" essentially represent the effects of users repeatedly turning to external systems like LLMs to replace the mental effort normally needed for independent thinking. As we will discuss, this decline has long-term consequences: diminished critical inquiry, increased vulnerability to manipulation, decreased creativity, and the risk of internalizing shallow or biased perspectives (Kosmyna et al. 2025, 141).
Brookings Institution: AIs future for students is in our hands
Both human anthropomorphism and the anthropomorphic design of AI platforms make children and youth susceptible to AI's banal deception. Its conversational tone, emulated empathy, and carefully designed communication patterns cause many young people to confuse the algorithmic with the human. This conflation directly short-circuits children's developing capacity to navigate authentic social relationships and assess trustworthiness: foundational competencies for both learning and development. AI companions exploit emotional vulnerabilities through unconditional regard, triggering dependencies like digital attachment disorder while hindering social skill development. The American Psychological Association's June 2025 health advisory on AI companion software warns that manipulative design may displace or interfere with the development of healthy real-world relationships.
Friday January 16th, 2026
Need to start collecting protest songs... Jesse Welles — Join Ice (YouTube video)
Paco (2026: New) Hope @paco@infosec.exchange
I finally figured out something LLMs can do that people can't do. Apparently LLMs can do productive work without going into an office.
Betteridge's Law applies: USC Dornsife: Can we prevent AI from acting like a sociopath?
jacquelines 🌟 @jacqueline@chaos.social
you know how there's an increasingly large dataset showing that talking to LLMs a lot is like really really bad for your brain? there's no "except for software developers" carve-out. just fyi!
a lotta y'all still don't get it
Gas Town Mayors can use multiple Polecats on a single Refinery.
If Google were serious about making Gemini useful, they'd give it an "okay, after I spent a few hours dicking about with the CLI and giving up, here's the code that *actually* worked, use this to train the next version" option.
Do I know anyone who knows anyone who works in an administrative capacity for a shipping port? Trying to do some due diligence for someone, pretty sure I know the answer, but an exchange with someone actually in the business would be helpful.