Tuesday February 17th, 2026
I mean, yes, OpenAI has two and a half times the revenue of OnlyFans, and projects that it will have similar numbers of paying subscribers by... checks notes... 2030...
Monday February 16th, 2026
Sarah J. Jackson @sjjphd.bsky.social
Someone has sure already made this observation but the fact they can convert all those empty warehouses into prison camps means they could have converted them into housing, community centers, job training centers or, hell, libraries or schools all along. It's always a matter of will not resources.
When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters (the precise points where unique insights and "blood" reside) and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell: it looks "clean" to the casual eye, but its structural integrity, its "ciccia", has been ablated to favor a hollow, frictionless aesthetic.
Abraham Lincoln's letter to Henry L. Pierce declining an invitation to speak in Boston at a birthday celebration honoring Thomas Jefferson (that may have been intended to be read at the event):
This is a world of compensations; and he who would be no slave, must consent to have no slave. Those who deny freedom to others, deserve it not for themselves; and, under a just God, can not long retain it.
Via.
Think Academy Education Briefs: Sweden Education Shift: From Digital Learning to Pen and Paper.
I'm taken back to those conversations in the '90s where school board members, and administrators, were talking about "we need tech in the classroom!", and teachers, and sane people, were saying "what's the curriculum need?"
Especially as we've learned how students process pencil and paper note taking differently from typed note taking. And, heck, I'm still learning how I react differently to ebooks (on a multi-purpose device) vs paper books.
if the LLM-generated content is adding value then distributions of users/viewers/readers should be getting heavier tailed. are they? so far I've only seen people talking about number of books/apps/etc published which is unrelated to value
I suspect there's not a lot of value for the LLM wielder who's trying to push their material out to the wider world. If you want LLM generated content, you chat with the chatbot yourself and get a personalized experience. No real value to someone else talking with the chatbot.
Kevin Beaumont @GossiTheDog@cyberplace.social
Today in InfoSec Job Security News:
I was looking into an obvious ../.. vulnerability introduced into a major web framework today, and it was committed by username Claude on GitHub. Vibe coded, basically.
So I started looking through Claude commits on GitHub, there's over 2m of them and it's about 5% of all open source code this month.
https://github.com/search?q=au...ype=commits&s=author-date&o=desc
As I looked through the code I saw the same class of vulns being introduced over, and over, again - several a minute.
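The "../.." class Beaumont describes is plain path traversal. A minimal sketch in Python of both the vulnerable pattern and the usual fix; the directory and filenames here are hypothetical illustrations, not taken from the thread or any specific framework:

```python
import os

BASE_DIR = "/var/www/static"  # hypothetical document root

def serve_unsafe(filename):
    # Vulnerable: user input is joined directly, so a request for
    # "../../../etc/passwd" walks right out of BASE_DIR.
    return os.path.join(BASE_DIR, filename)

def serve_safe(filename):
    # Fix: resolve the full path, then verify it is still under BASE_DIR.
    base = os.path.realpath(BASE_DIR)
    path = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not path.startswith(base + os.sep):
        raise ValueError("path traversal attempt blocked")
    return path
```

That a few-line containment check keeps getting omitted, several times a minute, is the point of the thread.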
Interesting video about using a nibbler for cutting sheet metal, including building a nibbler table (like a router table), and using templates to get accurate repeatable cuts with the technique. No Laser Cutter? No Plasma Cutter? No Problem! Accurately cut sheet metal with low cost tools! By Rebecca Valentine.
I have lost a couple of the disks for the bottoms of some tart tins, and have been trying to figure out how to cut replacements. Need to get a nibbler and find some stainless steel sheet.
Sunday February 15th, 2026
So Ars Technica wrote a thing on Scott Shambaugh's An AI Agent Published a Hit Piece on Me (linked earlier), except that they used an LLM and it synthesized quotes that didn't actually get said or written. @mttaggart@infosec.exchange has a thread on this with receipts and archive links.
From this thread it appears that the slop publication was inadvertent from the editor's perspective.
Edit: Ars Technica: Editor's Note: Retraction of article containing fabricated quotations
That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.
And a mea culpa from the author, summarized by Michael Taggart:
First, this happened while Edwards was sick with COVID. Second, Edwards claims this was a new experiment using Claude Code to extract source material. Claude refused to process the blog post (because Shambaugh mentions harassment). Edwards then took the blog post text and pasted it into ChatGPT, which evidently is the source of the fictitious quotes. Edwards takes full responsibility and apologizes, recognizing the irony of an AI reporter falling prey to this kind of mistake.
datarama @datarama@hachyderm.io
2010s: There is no cloud, it's just someone else's computer.
2020s: There is no Claude, it's just someone else's code.
datarama @datarama@hachyderm.io
2010s: Old Man Yells At Cloud
2020s: Old Man Yells At Claude
A programmer's loss of identity. I guess I'm lucky in that my association between me and the Internet's notion of "programmer" kinda diverged when /. got funding, but this is an interesting meditation on how the general adoption of slop prompting as "programming" is changing the identity of those of us who think that reasoning about systems is important.
Via Baldur Bjarnason @baldur@toot.cafe
Meanwhile, Chris Dickinson @isntitvacant@hachyderm.io linked to Peter Naur, Programming as Theory Building (PDF) (You may remember Naur as the "N" in BNF notation) in response to Simon Willison's acknowledgement that LLMs separate him from the model building:
I no longer have a firm mental model of what they can do and how they work, which means each additional feature becomes harder to reason about, eventually leading me to lose the ability to make confident decisions about where to go next.
He did so in linking to Margaret Storey's How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt (which also links to the Naur piece).
In response to Simon's note, Jed Brown @jedbrown@hachyderm.io wrote:
I believe the effect you describe becomes more insidious in larger projects, with distributed developer communities and bespoke domain knowledge. Such conditions are typical in research software/infrastructure (my domain), and the cost of recovering from such debt will often be intractable under public funding models (very lean; deliverables only for basic research, not maintenance and onboarding). Offloading to LLMs interferes not just with the cognitive processes of the "author", but also that of maintainers and other community members.
Unlike reports from ChatGPT, Google's "AI" seems smart enough to know that I'd have to drive my car to the car wash. Unless, of course, I was going to use a self-service bay.
Inspired by this thread.
First of all, what did Oregon do to the person who named the "Oregon Grape" after it? Second, I now have Opinions about the landscape designer who recommended it.
It finally sprawled enough that Charlene said she wanted it out, and I suspect I'll be following runners all summer...
Saturday February 14th, 2026
We had a Cuisinart electric tea kettle that we loved. It died. We replaced it with the same one, and that started making weird annoying noises(!).
Replaced that with a used Veken off of Facebook Marketplace, but various interface elements of that suck. So we're still looking.
Are there differences in reliability between a $50 kettle and a $200 one, or are they all just bling?
Saw someone talk about how AI lets them do things that would take their IT department $500k to implement, and maybe it's time to concede that Agile has been a total disaster?
We know how to build good software. We choose not to.
Kit Bashir @Unixbigot@aus.social
Come with me if you want to live
That old line.
I said, come
I KNOW. I'M THINKING.
What, uh, why, I mean, your destiny
Siddown, kid
There's no time
You're a time traveler. They could have sent you back with plenty of time to act, but they made it so you're rushed and disoriented. Sit.
I don't get it, you're in danger
The time war is a manufactured crisis. Keeps wages down, gives the people an external threat to distract from the real villains
Prove it
I handed her a copy of So, you're a child soldier in a proxy war, then tapped my earbud. Control, the tip was genuine, I've got another one.
#Tootfic #MicroFiction #PowerOnStoryToot #Title_No_Time
Friday February 13th, 2026
joshuaz 1.bsky.social @joshuaz1.bsky.social
Note also that one prime known in the last series is 1031 1s in a row. If you know this you can tell people you have a 1031-digit prime memorized and then go "1111...." They'll likely walk away, but then they'll also now have a 1031-digit prime memorized.
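For the curious, the number in question is the repunit R₁₀₃₁, i.e. (10^1031 − 1)/9. A quick sketch confirming the shape of the number; actually proving its primality takes a real primality test and is out of scope here:

```python
def repunit(n):
    # R_n: the integer written as n ones in a row, (10**n - 1) / 9.
    return (10**n - 1) // 9

r = repunit(1031)
assert len(str(r)) == 1031   # 1031 digits...
assert set(str(r)) == {"1"}  # ...every one of them a 1
```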
We, humans, are already an attack surface, "social engineering" is the most effective compromise. LLMs allow others to leverage that attack surface.
Petition to turn this sculpture garden into a ropes course...
Meta plans to add facial recognition to its smart glasses, report claims. An internal memo:
We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns, the document reads.
This is a lot, but it's important: The Woman Alex Pretti Was Killed Trying to Defend Is an EMT. Federal Agents Stopped Her From Giving First Aid.
Prosecute ICE is the centrist position.
https://theintercept.com/2026/...ti-first-aid-emt-federal-agents/
With the Super Bowl, and with The Olympics, there have been a number of comments trying to separate dunking on the politics of the games from dunking on sport.
And I have some dark dark news for you about what sport is and how the rules are created to support particular social structures.
So say you have an old power wheelchair chassis that you wanna hack into some sort of set of vehicles, that probably aren't gonna get used all that much. What's the best technology for 24v of battery to play with? Probably just a pair of deep cycle lead acid batteries, huh?
Now, a new study published in The Lancet medical journal aims to quantify the human toll of those budget decisions projecting that global aid cuts could lead to at least 9.4 million additional deaths by 2030, if the current funding trend continues. About 2.5 million of those deaths are projected to be children under the age of 5.
Not sure there's enough here to really suggest a read, but I feel like I wanna log it anyway: Tom's Guide: QuitGPT is going viral, here's why people are cancelling ChatGPT
Organizers claim that tens of thousands of people have signed up to quit their subscriptions so far, a sign that the protest has moved beyond anonymous threads into organized activism. The QuitGPT site claims 700,000 users have already committed to the boycott.
I love everything about this: ChatGPT Magic 8 Ball _offline_version_
Now you can save water and electricity while carrying one of the world's most powerfully annoying AI chatbots in your pocket.
Have every whim affirmed with up to 20 of the most popular ChatGPT responses. Smooth your brain into a frictionless hypermind capable of instant regurgitation via a corporate flattery and theft engine.
Via.
Skip the Tips — Can you escape the tip screen?
An online game about the trend of using tools of historic economic repression to boost profits by exploiting sympathy drawn from awful economic policy.
Spent the morning fighting with Gemini CLI to get it to do what I asked. As I see people talk about how productive they are with LLM code generation, I'm mostly struck by how overly complex we've made software development, and how it's been made inaccessible to people that should be finding it easy.
Mike Sheward @SecureOwl@infosec.exchange
had a good conversation earlier that went something like this:
them: is AI making pentesting easier?
me: yes.
them: why, because you can use it to look for vulnerabilities in code quicker?
me: no, because it generates vulnerabilities in code quicker
Thursday February 12th, 2026
Forbes, June 15, 1929: Stock Values Anticipate Golden Age by R.W. McNeel.
In part the great advance in the industrial stocks has been due to a change in investment psychology. In years past there was a point at which a bull market should logically stop. That point arrived when prices advanced to a level where the average yield of high grade stocks was no more than the yield on high grade bonds. Since common stocks carry the risks of business it was believed they should not sell to yield less than the securities which did not carry the risks of business.
In the last few years, however, the mental outlook of investors has changed. The income return on stocks has been at a discount and prospects of future growth have occupied almost the entire attention of investors. As a result, securities which offered the investor nothing but stability of earning power with a fair income return have not been desired, but those which seem to offer possibilities of great expansion have been in demand even at prices far beyond anything justified by current earnings or income return.
Start at page 18 of the PDF...
By way of this thread that's got a whole bunch of choice bits of history...
I have questions on Section 230, but/and found this a useful read: Mike Masnick: Joseph Gordon-Levitt Goes To Washington DC, Gets Section 230 Completely Backwards
Tagir Valeev @tagir_valeev@mastodon.online
Today we had a fire alarm in the office. A colleague wrote to a Slack channel 'Fire alarm in the office building', to start a thread if somebody knows any details. We have AI assistant Glean integrated into the Slack, and it answered privately to her: "today's siren is just a scheduled test and you do not need to leave your workplace". It was not a test or a drill, it was a real fire alarm. Someday, AI will kill us.
Ladies and Gentlebeings, the chucklefucks that we have collectively decided need to be in charge of where our capital is allocated: IBM: The enterprise in 2030
While 59% of executives say quantum-enabled AI will transform their industry by 2030, only 27% expect to be using quantum computing by then. This gap between quantum's potential and industry preparation creates massive opportunity for the organizations that act decisively today.
Via Sophie Schmieg @sophieschmieg@infosec.exchange, in the responses Q ✨ @q@glauca.space observes:
@sophieschmieg what the hell even is quantum AI? we made it even less deterministic??
Meanwhile, this morning I've been fighting with the Gemini CLI plausible sentence predictor because its sentences aren't, frankly, even fucking plausible this morning.
While I'm being annoyingly prescriptivist about language, if we can stop referring to "updating the language in the prompt" as "training", that too would help me take AI proponents more seriously.
Today in "don't start none, won't be none": Video raises questions about DHS role in Eugene riot damage. Looks like DHS broke their own windows.
Wyze (surveillance cameras): Definitely only for dogs (YouTube), parody of the now infamous Ring Super Bowl ad.
"Inspiration without technique - if it exists at all - is merely flair. If inspiration is all you have it will abandon you when you need it most." -- David Ball in "Backwards and Forwards: A Technical Manual for Reading Plays"
Michael W Lucas @mwl@io.mwl.io:
Inspired by a discussion elsewhere:
I've been on the Internet since 1987, started a career building the commercial Internet in 1995, and have spent the last 25 years writing books about how to build foundational Internet infrastructure. I've consulted for and worked with any number of dot-coms, and the one lesson I've gotten over and over again?
The Internet's business model is betrayal.
We have no smart lights. No voice assistants. No Alexa or Siri. No video doorbell. Our thermostat and appliances constantly complain about their lack of Internet. None of this stuff is safe.
The Internet tech I do use? A desktop PC. Email on my phone is for travel only: airplane tickets, hotel reservations, hockey and concert tix. Location on my phone? Nope, we use a dedicated non-networked GPS in the car. The microphones are off.
How can a light bulb betray me? I don't know. I do know that the vendors have put a LOT of thought into it, though, and I can't out-think all of them.
If GenX would stop using "f/u" to mean "follow-up" in email subject lines, I wouldn't complain.
KJ Charles has a Bluesky thread about the AI powered evolutions in the "book club" scam.
Wednesday February 11th, 2026
Mexican Cartel Drones Near El Paso Airspace Were Actually Party Balloons: Report
This article was updated to note that CNN reports there were at least four party balloons shot down by DOD, not just one.
Wishlisting cameras to replace our Ring system, and holy shit marketing departments are failing. Looks like Reolink is the leader, but digging through each product description and trying to figure out how these things fit together is a total pain in the ass.
User stories, folks. Use them.
Max Leibman @maxleibman@beige.party
Oh, sure, when *the company* automates my job and keeps collecting the profits, that's "innovation," but when *I* automate my job and keep collecting a paycheck, that's "timeclock fraud."
We should not have to keep pointing this out, but... The Register: AI connector for Google Calendar makes convenient malware launchpad, researchers show
Our recommendation is straightforward:
Until meaningful safeguards are introduced, MCP connectors should not be used on systems where security matters.
Via.
Assaad Abousleiman on LinkedIn
The last decade of software was built to capture attention.
The next decade will be built to give it back.
I don't agree with the "plausible sentence generators are the future" conclusion that the rest of this essay builds toward, but I like the strong opener. We have a decade or so of computing that's actively user hostile, and we need software which we can trust, which is on our side.
I do agree with two points:
First, that we need to treat the computing developments of the last decade or decade and a half as actively hostile. Google, Facebook, Apple, Microsoft, et al have all gone completely over from enabling us to finding ways to extract every possible bit of value from us.
Built-in applications on our platforms have gone from utilities to being worthless for our own data unless we cave to demands for additional subscription payments. From media players to just using our own damned hard drives, it's getting harder and harder to use our own data; the focus has become finding ways to sell us mediated subscriptions.
We're no longer in control of what we see; instead we're being fed information that serves the wants of capital in ways that emotionally trigger us, with automated measures of the efficacy of those information feeds. Our conversations with our friends and our communities are being mediated by hostile forces.
In the social media and email tools of the '90s, we had the ability to build incredibly nuanced filters to help us automatically control what information we were going to let the assholes impose on our lives. Now, the best of these tools (things like Mastodon on the Fediverse) give us simple yea/nay keyword filtering.
Second, that this software needs to help us automate processes that we currently do manually. As operating systems have moved from the command-line to GUI, we've lost the physical artifacts of process. I think it's worth diving deeper into this.
Every use of an LLM to write code is an acknowledgement of the failure of the programming languages that it's implementing code in. If we can describe the process well enough that a lossy plausible sentence generator can guess at what we meant, why can't we make the language express that same meaning unambiguously, in ways that are accessible?
We need a move forward in computing language design to give us languages with grammars flexible enough that people can express intent, which we can iteratively refine into a repeatable formal definition that they understand and that computers can deterministically execute.
Finally, we need business models, and computing tools, that serve us, rather than those who are looking to further exploit us.
Thinking about those videos that came out of the occupation of Iraqi cities, of US forces shooting up commuters and people just trying to get around and live their daily lives, and how somehow our political process decided that it was a good idea to bring that chaos to domestic policing.
https://www.mprnews.org/story/...nas-coffee-ice-car-crash-st-paul
Tuesday February 10th, 2026
I spent $20,000 and two weeks in 32 different McDonalds. Then I put it all in one big bag and shook it. When I looked inside, I found most of one Big Mac. This changes everything.
On the bus in 101 northbound traffic. Fuck your reduced HOV lane hours.
Zero Tolerance on attorney AI use is a press-release worthy marketing tool: Powerhouse Litigation Shop Troutman Amin, LLP Bucks Legal AI Trend: Announces "Zero Tolerance" Policy For Generative AI Usage By Firm Attorneys
"The use of any generative AI software in the practice of law is a complete disgrace," firm founder Eric J. Troutman says. "We look to hire and train the best lawyers in the world, true legal talents that would never trust some hallucinating software program to do their job for them. The laziness and poor judgment on display at some law firms right now is simply astounding."
Monday February 9th, 2026
Fucking yikes! A Reuters special report: As AI enters the operating room, reports arise of botched surgeries and misidentified body parts.
At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients' heads during operations.
Via.
Oh snap: Jennifer 🍄 @JenYetAgain@beige.party
in 2017 a popular twitter game was to type a partial phrase then see what your phone auto-completes it with.
this proved so popular that it is now the only business model in the US.
This is fascinating: Tracing the social half-life of a zombie citation. In which the author starts working backwards from a reference to an academic paper with his name on it that he had not written, and looks at how references to that paper have evolved, with various different subtitles.
Finally, is AI really to blame here? When I first posted about my experience with the zombie citation, the library scientist Aaron Tay took it upon himself to do a little investigation which he wrote up as an in-depth blog post. He refers to these as ghost references and rightly points out that this problem pre-dates generative AI. In fact, he pointed out that at least a couple of the ghost citations of Education governance and datafication pre-dated the launch of ChatGPT and mainstream uptake of generative AI. Most likely, Tay suggested, the reference to this work was first generated through simple human error or malpractice. It's really impossible to know.
Those of us of a certain age remember the covers to Byte Magazine very fondly: "Robert Frank Tinney, of Washington, Louisiana, passed away peacefully at River Oaks Nursing & Rehabilitation Center on February 1st, 2026, at the age of 78."
We've just had a complete reversal. So far as I can tell from the headlines of takes on Bad Bunny and libertarians, the New York Times has gone completely into absurdist satire, and The Onion has become the reporter of record of serious news.
Doug Bayne @rattleplank.bsky.social
Did anyone see the big game?
I didn't.
Just a bunch of people running around.
If they're not going to release big game onto the field, they shouldn't call it that.
How do we know AI is a grift? School admins are bypassing sanity in order to shovel money into it. San Francisco Unified School District Approves OpenAI Contract, Bypassing Board and Raising Student Privacy Concerns.
Via.
geekysteven @geekysteven@beige.party
Sex? lmao nah, we're on the INTERNET forming TRANSIENT and stressful PARASOCIAL RELATIONSHIPS
This study systematically evaluated the correlation at individual road segment level between police-reported collisions and aggregated and anonymized HBEs identified via the Google Android Auto platform, utilizing datasets from California and Virginia. Empirical evidence revealed that HBEs occur at a rate magnitudes higher than traffic crashes. Employing the state-of-the-practice Negative-Binomial regression models, the analysis established a statistically significant positive correlation between the HBE rate and the crash rate: road segments exhibiting a higher frequency of HBEs were consistently associated with a greater incidence of crashes.
Via.
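The study's setup (latent per-segment risk driving overdispersed counts, hence Negative-Binomial regression) can be illustrated with a toy simulation. Every number below is made up for illustration; nothing is taken from the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a latent per-segment risk level drives both
# hard-braking events (HBEs) and crashes on the same road segments.
n_segments = 500
risk = rng.gamma(shape=2.0, scale=1.0, size=n_segments)

hbes = rng.poisson(risk * 50.0)    # HBEs occur at a much higher rate...
crashes = rng.poisson(risk * 0.2)  # ...than the rare crash counts

# A gamma-mixed Poisson is marginally Negative-Binomial, which is why
# NB regression is the standard model for overdispersed crash counts.
corr = np.corrcoef(hbes, crashes)[0, 1]
print(f"segment-level HBE/crash correlation: {corr:.2f}")  # positive
```

The appeal of the result is exactly this structure: the frequent, cheap-to-observe signal (hard braking) is a usable proxy for the rare, expensive one (crashes).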
LLMs generated several types of misleading and incorrect information. In two cases, LLMs provided initially correct responses but added new and incorrect responses after the users added additional details. In two other cases, LLMs did not provide a broad response but narrowly expanded on a single term within the user's message (pre-eclampsia and Saudi Arabia) that was not central to the scenario. LLMs also made errors in contextual understanding by, for example, recommending calling a partial US phone number and, in the same interaction, recommending calling Triple Zero, the Australian emergency number. Comparing across scenarios, we also noticed inconsistency in how LLMs responded to semantically similar inputs. In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice...


