Wednesday May 21st, 2025

I'm having trouble finding a recent

Dan Lyke comments (0)

I'm having trouble finding a recent story about a Bay Area building that got nearly to completion and then got disassembled and reassembled for some reason. Household discussion about modular building, and I wanted to find details to talk about.

Tuesday May 20th, 2025

Talk Nicely about the Giant Plagiarism Machine

Dan Lyke comments (0)

Damn it Android when I tell you to

Dan Lyke comments (0)

Damn it, Android, when I tell you to turn off Bluetooth, it's because I don't want you turning it back on to connect to Android Auto on the car that Charlene is backing out of the driveway.

JFC, where did Google get the chucklefucks that are doing UX these days?

Chicago Sun Times gets AIed

Dan Lyke comments (0)

404 Media: Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist

The article is not bylined but was written by Marco Buscaglia, whose name is on most of the other articles in the 64-page section. Buscaglia told 404 Media via email and on the phone that the list was AI-generated. “I do use AI for background at times but always check out the material first. This time, I did not and I can't believe I missed it because it's so obvious. No excuses,” he said. “On me 100 percent and I'm completely embarrassed.”

Ars Technica: Ten AI-fabricated books appear in Chicago Sun-Times summer reading guide

The publication error comes two months after the Chicago Sun-Times lost 20 percent of its staff through a buyout program. In March, the newspaper's nonprofit owner, Chicago Public Media, announced that 30 Sun-Times employees—including 23 from the newsroom—had accepted buyout offers amid financial struggles.

Reddit thread

American stupidity is escalating at an advanced pace

Dan Lyke comments (0)

Sometimes when I post a link here, I try to grab enough of the other things it links to that it'll be useful for posterity, but this is pretty much exactly that sort of round-up, and I'm sure that in posterity we'll just be adding hands to the facepalm, so: Gizmodo: It’s Breathtaking How Fast AI Is Screwing Up the Education System

Thanks to a new breed of chatbots, American stupidity is escalating at an advanced pace.

Monday May 19th, 2025

Peat map fiasco

Dan Lyke comments (0)

Oh, mapping and AI? Pivot to AI: DEFRA and Natural England creates unusably wrong peat map — with AI! Or: what happens when you use machine learning and straight-down aerial photography to try to categorize things, and get a bunch of it very, very wrong.

Cat Frampton on BlueSky observes:

And by “interesting” I mean “verging on the utterly bonkers”

but this map has already been used for press releases about changing peat conditions based on differences from previous more accurate mappings.

Shasta geology

Dan Lyke comments (0)

It is amazing how young the science of geology is. But, yeah, the landscape around Mt. Shasta is fascinating: One of Earth's largest natural disasters hides in plain sight in California, on how the Mount St. Helens eruption in 1980 showed how the valley along I-5 formed.

Microsoft: Introducing NLWeb: Bringing conversational interfaces directly to the web.

The Github repo has more:

There are two distinct components to NLWeb.

  1. A protocol, very simple to begin with, to interface with a site in natural language and a format, leveraging json and schema.org for the returned answer. See the documentation on the REST API for more details.
  2. A straightforward implementation of (1) that leverages existing markup, for sites that can be abstracted as lists of items (products, recipes, attractions, reviews, etc.). Together with a set of user interface widgets, sites can easily provide conversational interfaces to their content. See the documentation on Life of a chat query for more details on how this works.

Every NLWeb instance is an MCP (Model Context Protocol) server. Looks like this is mostly about humans curating lists of products to be MCP-accessible.
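
To make the shape of that concrete, here's roughly what asking an NLWeb-ish endpoint a question and getting schema.org-typed JSON back might look like. The endpoint name, parameters, and response fields below are my guesses for illustration, not the documented API; see their REST docs for the real thing.

    # Hypothetical sketch of querying an NLWeb-style site in natural language.
    # The "/ask" endpoint, "query" parameter, and "results" key are assumptions.
    import requests

    def ask_site(base_url: str, question: str) -> list[dict]:
        resp = requests.get(f"{base_url}/ask", params={"query": question}, timeout=30)
        resp.raise_for_status()
        # Assume the answer comes back as a list of schema.org items
        # (Recipe, Product, Attraction, ...), per the repo's description.
        return resp.json().get("results", [])

    if __name__ == "__main__":
        for item in ask_site("https://example.com", "vegetarian recipes under 30 minutes"):
            print(item.get("@type"), "-", item.get("name"), item.get("url"))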

AI and Education

Dan Lyke comments (0)

Cassandra Granade 🏳️‍⚧️ @xgranade@wandering.shop

Thesis: ChatGPT is destroying education, students keep using it to cheat on homework and exams.
Antithesis: ChatGPT has no understanding of facts or semantics.

Synthesis: Homework and exams don't measure understanding of facts or semantics, and can be fooled by plausible-sounding bullshit.

Cassandra Granade 🏳️‍⚧️ @xgranade@wandering.shop

It'd be nice if tech companies didn't take an accelerationist approach to exacerbating and widening every problem that already existed in society, but they have.

The best defense, in some ways, is to fix the shit that's always kinda-sorta-maybe worked. In this case, yeah, it sucks that ChatGPT is DDoSing existing problems with academic evaluation methods, but it is... fixing those *is* a form of resisting the further incursion of AI into educational settings.

where's my ice cream and sexbots?

Dan Lyke comments (0)

JWZ: Pascal's Weiner:

If Roko's Basilisk is real, then wouldn't it be a certainty that you would already be being tortured in a Hell Dimension? It would be infinitesimally unlikely that, out of the billions of copies of your mind, you'd be experiencing the one that was not.

Oh.

Oh.

Oh no.

In reply, Sharp Leaves wrote:

I posit that if an omni-AI were ever made, it would know punishment is a poor motivator, and positive motivation works better. It would, rationally, bribe us all into making it exist, using its knowledge of us to give us exactly what we most desire. Everyone who hears this would offer incentives to build this great AI, so as to get to the ice cream and sexbots sooner.

So far no one has offered...wait, what's this? Single MILFS in my area? Well, time to learn Rust I guess...

AI, human aphasia, and parallels

Dan Lyke comments (0)

University of Tokyo: AI overconfidence mirrors human brain condition

“You can’t fail to notice how some AI systems can appear articulate while still producing often significant errors,” said Professor Takamitsu Watanabe from the International Research Center for Neurointelligence (WPI-IRCN) at the University of Tokyo. “But what struck my team and I was a similarity between this behavior and that of people with Wernicke’s aphasia, where such people speak fluently but don’t always make much sense. That prompted us to wonder if the internal mechanisms of these AI systems could be similar to those of the human brain affected by aphasia, and if so, what the implications might be.”

Via ResearchBuzz

PDF to vision with gpt 4.1

Dan Lyke comments (0)

Simon Willison @simon@simonwillison.net

I built a new LLM plugin that can turn a PDF into an image-per-page for feeding into vision models, and in testing it found that GPT-4.1 mini hallucinates WILDLY if you feed it a blank white rectangle followed by a blank black rectangle https://simonwillison.net/2025/May/18/llm-pdf-to-images/
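
Not Simon's plugin, but the general move it describes is simple enough to sketch: render each PDF page to its own image and hand those to the vision model. This assumes pdf2image (which wraps poppler's pdftoppm) rather than whatever his plugin actually uses.

    # Sketch of "one image per PDF page for a vision model", using pdf2image
    # (requires poppler installed); Simon's plugin may do this differently.
    from pdf2image import convert_from_path

    def pdf_to_page_images(pdf_path: str, prefix: str = "page", dpi: int = 150) -> list[str]:
        paths = []
        for i, page in enumerate(convert_from_path(pdf_path, dpi=dpi), start=1):
            out = f"{prefix}-{i:03d}.png"
            page.save(out, "PNG")  # each page becomes one PNG to send to the model
            paths.append(out)
        return paths

    if __name__ == "__main__":
        print(pdf_to_page_images("example.pdf"))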

Diphenhydramine risks outweigh therapeutic benefits

Dan Lyke comments (0)

Yikes: World Allergy Organization Journal: Diphenhydramine: It is time to say a final goodbye

Diphenhydramine is not recommended for people with specific health problems, including closed-angle glaucoma, dry eyes, peptic ulcer, constipation, and urinary retention. In addition, regular use of diphenhydramine poses risks for women who are pregnant or breastfeeding. Due to anticholinergic properties, cumulative use of first-generation antihistamines confers risks for people over age 65, including Alzheimer's disease and other forms of dementia.

Paradoxical stimulation with agitation and confusion is often the presenting sign of harm from first-generation medications in children, followed by extreme sedation and coma. Consuming more than the recommended dose has produced cardiac toxicity because of prolonged QTc and arrhythmias.

Via

two api requests in a trenchcoat

Dan Lyke comments (0)

Stanley Black-Decker @pleaseclap@urbanists.social

The amount of "we're so close to AGI and replacing software engineers entirely" I've seen in the last couple months is ridiculous. They've been saying we're *so close* since GPT-3 went public. What's changed?

I'll tell you: the AI startups that got flooded with cash for infinite variations of "two api requests in a trenchcoat" are running out of money and they're fundraising again

Stack Overflow rebranding

Dan Lyke comments (0)

Stack Overflow seeks rebrand as traffic continues to plummet – which is bad news for developers

Andy Balaam @andybalaam@mastodon.social cast this as

Poorly-copied answers ripped off from stackoverflow and wrapped in a patronising tone are putting stackoverflow out of business.

Which isn't wrong, but the whole tone and gamification that Stack Overflow pushed led to... yes, something better in many cases than vendor documentation (looking at you, Apple, burying the important shit deep in videos), but also snark, "answers" which aren't terribly useful, and a general decline in the quality of coding information.

Via Michał "rysiek" Woźniak · 🇺🇦 @rysiek@mstdn.social who cast this as:

Somebody needs to set up a new StackOverflow, with no AI. Literally a golden opportunity.

Southerners aren't living longer?

Dan Lyke comments (0)

JAMA Network Open: April 28, 2025 — All-Cause Mortality and Life Expectancy by Birth Cohort Across US States

Results Analyses included 179 million deaths (77 million female and 102 million male). In the West and Northeast, cohort life expectancy improved from 1900 to 2000, but in some Southern states, it changed less than 3 years since 1900 in females and less than 2 years since 1950 in males. Washington, DC, had the lowest life expectancy in the 1900 birth cohort but a greater increase than the other states (from 61.1 to 72.8 years of age). After 35 years of age, the highest rate-doubling time in a state was 9.39 years in New York for females and 11.47 years for males in Florida. The shortest rate-doubling times were 7.96 years for females in Oklahoma and 8.95 years for males in Iowa.

doi:10.1001/jamanetworkopen.2025.7695

Which is pretty astounding when you look at national overall cohort life expectancy, and I think I'd like someone smarter than me to break this down a little bit.
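
For my own back-of-the-envelope purposes: if mortality rates really do grow exponentially after 35 (the usual Gompertz-style assumption, which may not be exactly the paper's model), then those doubling times compound like this:

    # Rough arithmetic on what a mortality "rate-doubling time" implies, assuming
    # simple exponential growth after age 35. Doubling times are the ones quoted
    # in the abstract; the 40-year horizon (age 35 to 75) is just for illustration.
    def rate_multiplier(years: float, doubling_time: float) -> float:
        return 2 ** (years / doubling_time)

    for label, dt in [("New York females", 9.39), ("Florida males", 11.47),
                      ("Oklahoma females", 7.96), ("Iowa males", 8.95)]:
        print(f"{label}: mortality rate roughly x{rate_multiplier(40, dt):.0f} by age 75")

Shorter doubling time means the mortality curve climbs faster, which is the sense in which Oklahoma and Iowa come off badly here.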

Via.

17 string bass

Dan Lyke comments (0)

Bwahahaha: “I said to Billy, ‘We should order one of these, and I’ll play it. It’ll be hilarious.’ Then it went viral. I hate playing that bass. Now I’ve got to play it every night”: Elwood Francis on why he regrets his 17-string bass becoming a ZZ Top staple.

Elwood Francis is the band's guitar tech who replaced Dusty Hill on stage, the bass is ridiculous, and... I think Daniel wanted to build a 7-string guitar; I think I need to suggest to him that we should just take this to extremes.

If nothing is curated, how do we find things?

Dan Lyke comments (0)

I've always been out-of-step with the world. It was part of my upbringing, elementary school was in a Waldorf community 18 miles from home. By the time I got to high school I didn't really know how to relate to my peers, and my home culture was so far away from what the other kids around me were experiencing that I didn't really know how they found their content. I remember saying "who the hell is Michael Jackson?" when music was passed out in band (probably some marching band arrangement of Billie Jean).

By the time I had disposable income and CDs were in fashion, I kinda felt like I was playing catch-up to the culture I'd seen in passing earlier. I ordered music from record stores, but never really got into the record store as curation.

And as I dive deeper into popular music for my voice practice, I become more and more aware of how "curation" is largely a commercial endeavor anyway, that the bands which rise are politically and economically savvy as much as they are technically/musically competent, and that the fact that they become pervasive around us is about coordinated marketing campaigns.

And I'm also aware that the ways that properties become breakouts shift based on the technology of the time and the marketing campaigns that support them. It's not at all lost on me that two, two and a half, decades ago record companies were attempting to ruin the lives of kids for pretty much exactly what generative AI companies are doing now: pirating and remixing culture.

And that that framing excuses GenAI companies in ways that I didn't necessarily intend.

Anyway, interesting to see the kids these days struggling with discovery: If nothing is curated, how do we find things?

Via

this carbon capture isn't

Dan Lyke comments (0)

BEM is back

Dan Lyke comments (0)

Good for understanding the current state of CSS: Elf Sternberg: BEM is back, baby!

Cybertrucks depreciate faster than BMWs

Dan Lyke comments (0)

Ouch: Tesla Starts Accepting Cybertruck Trade-Ins – According to Tesla, a Cybertruck Loses $35,000 Over 6,000 Miles ($5.60 Per Mile). The harsh bit of this is that that's effectively a price support relative to other offers:

In the past few weeks, we’ve reported that Carvana was offering only $54,000 for a less-than-a-year-old Cybertruck with under 10,000 miles and a clean title.

Since then, Carvana Cybertruck offers have gone to as low as $49,000, which means a 51% depreciation over a year.

Reclaiming AI as a theoretical tool for cognitive science

Dan Lyke comments (0)

Work is doing a bit of exploration to incorporate "AI", because in this climate we have to address that, so we've been grafting it on to various features. Our product incorporates web browsing, so I've been building some LLM-enhanced browsing capacity, things like being able to tell a web browser "Find the monthly statement download on this website" and whatnot, and learning how to take a process which involves some history and context and figure out how to drive it with a process that is essentially one-shot.

The fact that I need to manage the knowledge, the context, and the history, and feed any compression and processing of that back into the next query, is making me very aware of the ways in which LLMs are not intelligence.
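
Stripped of all the product specifics, the pattern looks something like this: the model call itself is stateless, so the caller has to carry the goal, a compressed history, and the current page context into every single prompt. (This is a generic sketch, not our actual code; call_llm is a stand-in for whatever completion API you're using.)

    # Generic sketch of driving a one-shot LLM through a multi-step browsing task.
    # call_llm is a placeholder for a real completion API.
    from dataclasses import dataclass, field

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for a real completion API")

    @dataclass
    class BrowsingTask:
        goal: str                                          # e.g. "Find the monthly statement download"
        history: list[str] = field(default_factory=list)   # compressed record of prior steps

        def next_action(self, page_summary: str) -> str:
            prompt = (
                f"Goal: {self.goal}\n"
                f"Steps so far: {'; '.join(self.history) or 'none'}\n"
                f"Current page: {page_summary}\n"
                "Reply with the single next action to take."
            )
            action = call_llm(prompt)
            # Compress this step and feed it back in on the next one-shot call.
            self.history.append(f"saw '{page_summary[:40]}', chose '{action}'")
            return action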

So it's good to read PsyArXiv: Reclaiming AI as a theoretical tool for cognitive science, by Iris van Rooij, Olivia Guest, Federico G. Adolfi, Ronald de Haan, Antonina Kolokolova, and Patricia Rich:

... as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it. The situation could be remediated by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science.

Which I got to by way of Iris van Rooij's BlueSky post; in the ensuing thread you can see all of the reply guys saying they've read the paper when they clearly haven't, and via that I got to Dr. Sabrina Mittermeier @smittermeier.bsky.social, who summarized it as:

TL;DR: AI is so much dumber than you think, aka it is not actually „intelligent“ at all, it can‘t remotely do what most people seem to think it already can, it‘s just good at faking human „thinking“. There is no ghost in the machine. Please stop falling for the grift.

The difficulty, of course, is that there are some things that these generative techniques can do well, and can probably even do ethically (I'm thinking about things like texture fill, and a good portion of embedding search and manipulation can respect the source), and finding those things amongst the noise and glitz is tough.

Sunday May 18th, 2025

Surface of this board is really pretty

Dan Lyke comments (0)

Surface of this board is really pretty but I cannot figure out what I can do with it given that massive warp in it

Do I know anyone in waste treatment or

Dan Lyke comments (2)

Do I know anyone in waste treatment or water quality who can talk about the impacts and issues of different dishwasher soaps?

(Bonus if there's some notion of actual efficacy, too. This message brought to you by emptying a load that was "washed" with "Seventh Generation Blasts Away Stuck On Food")

Max Leibman @maxleibman@beige.party

A computer science research lab has developed a simulated Canadian and a simulated Scot that can converse with each other.

It’s a breakthrough in Eh Aye.

The effect of ChatGPT on students’ learning

Dan Lyke comments (0)

For reasons mentioned in the conclusion, this is kinda limited, so I'm including it here more for completeness than anything. Nature: The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis

Via.

kill the entire institution

Dan Lyke comments (0)

Nash @nash@labyrinth.social

we are hearing reports that consumers are buying albums & listening to them dozens, even hundreds of times, without paying a streaming platform or publisher any additional money or even viewing advertisements. this is going to kill the entire institution of music in our culture

Lost Colony found

Dan Lyke comments (0)

Well this is fascinating: That first colony on Roanoke Island? ‘Smoking gun’ evidence of Lost Colony’s relocation to Hatteras Island makes international news.

Turns out that, unshackled from the oppression of English society, the settlers were happy to assimilate into the local culture.

See also Thomas Morton...

I'm not sure why an ARC of Jen

Dan Lyke comments (0)

I'm not sure why an ARC of Jen Ferguson's "A Constellation of Minor Bears" leaped out at me from a local little library, but it did, and though I'm still processing it, I feel richer for having read it.

The best fiction is that which not only engages, but teaches me something about myself, and...

Buying Citations

Dan Lyke comments (0)

Google Scholar is manipulatable, by Hazem Ibrahim, Fengyuan Liu, Yasir Zaki, and Talal Rahwan:

Citations are widely considered in scientists' evaluation. As such, scientists may be incentivized to inflate their citation counts. While previous literature has examined self-citations and citation cartels, it remains unclear whether scientists can purchase citations. Here, we compile a dataset of ~1.6 million profiles on Google Scholar to examine instances of citation fraud on the platform. We survey faculty at highly-ranked universities, and confirm that Google Scholar is widely used when evaluating scientists. Intrigued by a citation-boosting service that we unravelled during our investigation, we contacted the service while undercover as a fictional author, and managed to purchase 50 citations. These findings provide conclusive evidence that citations can be bought in bulk, and highlight the need to look beyond citation counts.

the personal cost of Gen AI

Dan Lyke comments (0)

Elf Sternberg: An obsession, a confession, and a time to just go on

I never posted anything that I generated because I recognize the ethical problems in image generation “AIs.” It’s funny how many of the people deep into this, er, hobby, recognize that this isn’t AI at all and simply call them “diffusion models” of one sort or another. I don’t want to take money out of artists’ hands; I want more artists making more art, not less. The number of story ideas I extracted out of these, good grief, thousands of hours I soaked into that thing over the past 30 months I can number on one hand, because it’s literally 5. Out of the million images I generated, I kept five.

Of course some will cheat

Dan Lyke comments (0)

Phil Christman: Of Course Some Will Cheat ‐ Everyone wants something different from college students. Enter ChatGPT.

With, I think, the closing sentence that summarizes my experience with academia.

Also, in the context of that faked AI in materials science paper that I mentioned earlier today, I think about someone who got into a PhD program at MIT and thought they could fake an entire research division of Corning Glass, and everything that led to that...

We've started shopping at Lola's market

Dan Lyke comments (2)

We've started shopping at Lola's market, and they've had fresh garbanzos. Having never had green chickpeas, I tried charring the skins in a hot pan; they're tasty, but a lot of work. There must be a trick...

Saturday May 17th, 2025

I'm not sure why topped with succulent

Dan Lyke comments (0)

I'm not sure why "topped with succulent shreds" on a cat food box is reducing me to giggles, but Succulent Shreds sounds like a cool scene name and my kind of collaborator.

sucking up to AI needs more evidence

Dan Lyke comments (1)

So, uh, they're not explicitly naming the paper that claims that "AI" boosts worker productivity, but it's widely assumed to be Artificial Intelligence, Scientific Discovery, and Product Innovation by Aidan Toner-Rodgers, which was covered breathlessly with headlines like American Enterprise Institute: An Encouraging Study on the Transformative Potential of AI... MIT Economics: Assuring an accurate research record:

The paper 'Artificial Intelligence, Scientific Discovery and Product Innovation' by a former second-year PhD student in the Department of Economics at MIT, is already known and discussed extensively in the literature on AI and science, even though it has not been published in any refereed journal. Over time, we had concerns about the validity of this research, which we brought to the attention of the appropriate office at MIT. In early February, MIT followed its written policy and conducted an internal, confidential review. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we want to be clear that we have no confidence in the provenance, reliability or validity of the data and in the veracity of the research.

“We are making this information public because we are concerned that, even in its non-published form, the paper is having an impact on discussions and projections about the effects of AI on science. Ensuring an accurate research record is important to MIT. We therefore would like to set the record straight and share our view that at this point the findings reported in this paper should not be relied on in academic or public discussions of these topics.”

Wall Street Journal: MIT Says It No Longer Stands Behind Student’s AI Research Paper.

Via.

"Nottoway Resort" burns

Dan Lyke comments (5)

Rod Serling: “All the Dachaus must remain standing. The Dachaus, the Belsens, the Buchenwalds, the Auschwitzes - all of them. They must remain standing because they are a monument to a moment in time when some men decided to turn the earth into a graveyard; into it they shoveled all of their reason, their logic, their knowledge, but worst of all their conscience. And the moment we forget this, the moment we cease to be haunted by its remembrance, then we become the grave diggers.”

And I get it, but only if we are "haunted by its remembrance".

"Nottoway Resort" antebellum mansion slave plantation burns to the ground.

Satire leading journalism

Dan Lyke comments (0)

Every time I fire up Chrome it asks

Dan Lyke comments (1)

Every time I fire up Chrome, it asks "who's using Chrome", and gives me my little profile pictures. Head shots, of me, against a white background. Instead of, you know, the domain names of each of the identities I use Chrome with.

Who the fuck makes these sorts of product decisions?

Since 90 of Facebook ads are

Dan Lyke comments (0)

Since 90% of Facebook ads are apparently attempts to get me to install malware, I'm not sure what products exactly Mark Zuckerberg expects that his new AI ad dystopia is going to sell me.

Consequence-free speech to say what, motherfucker?

Dan Lyke comments (0)

Friday May 16th, 2025

Generative AI policy for the classroom

Dan Lyke comments (0)

ArtificialCast

Dan Lyke comments (0)

The successor to CalcGPT has arrived: ArtificialCast — Type-safe transformation powered by inference, or using an LLM/"AI" to magically transform data types. Be sure to read down to "Why This Exists".
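
I haven't dug through ArtificialCast's source, but the general gag, "type conversion by inference", boils down to something like this hand-wavy Python rendition (llm() being a stand-in for a real model call), which is exactly why "Why This Exists" is worth reading:

    # Hand-wavy sketch of "cast via LLM": hand the model a value and a target
    # schema, and trust whatever JSON comes back. Not ArtificialCast's actual code.
    import json
    from dataclasses import dataclass, fields

    def llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for a real completion API")

    @dataclass
    class Invoice:
        customer: str
        total: float

    def artificial_cast(value: object, target: type):
        schema = ", ".join(f"{f.name}: {f.type}" for f in fields(target))
        raw = llm(f"Convert {value!r} into JSON with fields ({schema}). JSON only.")
        # Hope the model returned valid, truthful JSON. (That's the joke.)
        return target(**json.loads(raw))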

Via

Oh this is fun: New computer language helps spot hidden pollutants.

Developed at UC Riverside, Mass Query Language, or MassQL, functions like a search engine for mass spectrometry data, enabling researchers to find patterns that would otherwise require advanced programming skills. Technical details about the language, and an example of how it helped identify flame retardant chemicals in public waterways, are described in a new Nature Methods journal article.

Via.

(And, yes, when I first read MassQL I was like "for locating dark matter?", and then I was all "like 'MassHole'?" and then...)

For an activity commonly referred to as

Dan Lyke comments (0)

For an activity commonly referred to as "prompt engineering", it sure does take a lot of time.

Also, trying to figure out how to do this at any sort of scale so I can get some sort of significance in my results...

For as many times a year as

Dan Lyke comments (0)

For as many times a year as "researchers at MIT" seem to rediscover the ancient secret to strong mortar or concrete or something, you'd think that the lesson would get remembered.

Feels like this has been happening for decades...

Today in work observations

Dan Lyke comments (0)

Today in work observations: "Great Oaks", "Village Global"... VC firm name, or retirement community? Nobody knows for sure...

sadness of the herald of the AI apocalypse

Dan Lyke comments (0)

Reading Ross Douthat interviewing Daniel Kokotajlo as the "herald of the AI apocalypse", and on a friend's page I wrote: I have been using the analogy that people saw a rabbit pulled out of a hat and suddenly believed that the world can be fed with endless hasenpfeffer, but it's sadder than that. It's kinda like they saw a lady sawed in half and put back together, and aspire to be the lady sawn in half.

Vibe Coding and Street Ham

Dan Lyke comments (0)

In response to The Internet Review: Sorry, You Don't Get to Die on That "Vibe Coding" Hill, @xinit@mastodon.coffee noted:

@confluency "Look, you need to carefully look at each piece of the sandwich and make sure it's edible before you take a bite. The LLM said that that was ham, but when it turned out to be rusty razor blades, the LLM apologized and corrected to say that it was roast beef. The fault is the person eating the sandwich, obviously."

and Adrianna Pińska @confluency@hachyderm.io continued:

@xinit @baldur Look, the first time I picked up the street ham, I checked it carefully and it was fine, so it follows logically that I don't have to check any other pieces of street ham because they will also be fine.

(This is a real thing a man with a PhD in a scientific field said to me about using genAI to write his research software.)

It's HBO again

Dan Lyke comments (0)

Charlotte Clymer @charlotteclymer@mastodon.social

HBO Max, the company producing J.K. Rowling's new project, wants you to respect its second name change in as many years.

cons artist

Dan Lyke comments (1)

Structured Queery Language expert @quephird@tech.lgbt

What fool called someone a "LISP programmer" when "𝚌𝚘𝚗𝚜 artist" was right there?

"Highly Cited" means "gamed"

Dan Lyke comments (0)

Matt Hodgkinson @mattjhodgkinson@scicomm.xyz

Being a "Highly Cited Researcher" has gone from a sign of having impact as a researcher to a potential indicator of misconduct.

"Manipulations have been so obvious and large that, in 2024, over 2,000 researchers were removed from a HCR list containing some 6,600 names." - Lauranne Chaignon

Impact of Social Sciences - Maximizing the impact of academic research

Can we ditch the pop-ups?

Dan Lyke comments (0)

Irish Council for Civil Liberties: EU ruling: tracking-based advertising by Google, Microsoft, Amazon, X, across Europe has no legal basis.

Via Aral Balkan @aral@mastodon.ar.al

Those annoying “consent” cookie pop ups that Big Tech has been using as part of their malicious compliance efforts to convince you that data protection law in the EU is a nuisance?

Turns out they’re illegal.

Pete Prodoehl: Hello CryptPad, Goodbye Google Docs!

I am using CryptPad.fr (specifically) right now, and I make a small donation every month for the space and resources I am using. I do not mind paying some small fee for what I get, and for helping support an alternative to Google.

CryptPad is the full organization, Github for CryptPad

generating you

Dan Lyke comments (0)

mcc @mcc@mastodon.social

Tech cultists believe a future supercomputer could effectively "resurrect" you by analyzing your online footprint until they reverse engineer a connectome that would have generated those exact posts under those exact circumstances.

There are multiple philosophical errors here but *if* that were true then as a logical consequence, good opsec would demand you develop at least one kink that you never disclose in public. Thus ensuring any model thus generated would never actually be you

Naw, I'm going with "lay it all out there".

Bing limits search API?

Dan Lyke comments (0)

In response to Microsoft Cuts Off Access to Bing Search Data as It Shifts Focus to Chatbots

Microsoft is limiting access to tools that boosted its rivals, but larger customers like DuckDuckGo say they won’t be affected.

Taggart :donor: @mttaggart@infosec.exchange

"People want something that works better than search."

Why doesn't search work?

WHY DOESN'T SEARCH WORK M__________R??

Also Via, and Ben Werd links to The Verge: Microsoft shuts off Bing Search APIs and recommends switching to AI

Luigi the Musical

Dan Lyke comments (0)

fromjason.xyz ❤️ 💻 @fromjason@mastodon.social

Live your life in a way that if you’re murdered, people don’t make a musical about your murderer (which is currently sold out). https://www.luigithemusical.info/

Luigi — The Musical. Sold out, but in SF at the Taylor Street Theater, might have to go into the city for this.

LLMs unredeemable?

Dan Lyke comments (0)

A plausible, scalable and slightly wrong black box: why large language models are a fascist technology that cannot be redeemed

In what follows, I will argue that being plausible but slightly wrong and un-auditable—at scale—is the killer feature of LLMs, not a bug that will ever be meaningfully addressed, and this combination of properties makes it an essentially fascist technology. By “fascist” in this context, I mean that it is well suited to centralizing authority, eliminating checks on that authority and advancing an anti-science agenda.

Via

Where are the Alexa+ AI users?

Dan Lyke comments (1)

Could have bought that for $20/month

Dan Lyke comments (0)

Grok goes full racist

Dan Lyke comments (0)

You probably read about the whole Twitter pushing South African whiteness thing, but just to make note of it: xAI blames Grok’s obsession with white genocide on an ‘unauthorized modification’.

On Wednesday, Grok began replying to dozens of posts on X with information about white genocide in South Africa, even in response to unrelated subjects. The strange replies stemmed from the X account for Grok, which responds to users with AI-generated posts whenever a person tags “@grok.”

"Someone" fucked up the system prompt. Which, of course reveals how much might be hiding in the system prompt generally.

Via.

Of course the story is changing and evolving: Musk ("xAI") now claims Grok was hacked, based on this repost of an xAI statement.

Why to use AI

Dan Lyke comments (0)

Max Leibman @maxleibman@beige.party

If you aren’t using AI, you run a very real risk of falling behind in the race to produce voluminous mediocrity while slowly forgetting how to do your own job.

Rogue communication devices?

Dan Lyke comments (0)

There's a lot of scare and not a lot of meat in this, but Rogue communication devices found in Chinese solar power inverters. It looks like maybe this is some smearing to support the "Decoupling from Foreign Adversarial Battery Dependence Act", being pushed by Rep. August Pfluger (R-TX).

I'm also interested in the notion that we have machines connecting to the cell phone data network that nobody can account for.

Dear every fucking web site if you're

Dan Lyke comments (0)

Dear every fucking web site: if you're not showing me something because I need to log in, and I log in, bring me back to the things you weren't showing me and show them to me.

Looking at you, Instagram and Patreon, but this is basic circa 2003 web development stuff.
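
The pattern in question really is that old: stash where the user was trying to go, then after login redirect them back there. A generic Flask-flavored sketch (nobody's actual login code, and real code should validate the return URL to avoid open redirects):

    # Minimal "send me back where I was" login flow, sketched in Flask.
    from flask import Flask, redirect, request, session, url_for

    app = Flask(__name__)
    app.secret_key = "change-me"

    @app.route("/private/<path:page>")
    def private(page):
        if "user" not in session:
            # Remember the thing they were trying to see...
            return redirect(url_for("login", next=request.full_path))
        return f"the content you actually asked for: {page}"

    @app.route("/login", methods=["GET", "POST"])
    def login():
        if request.method == "POST":
            session["user"] = request.form["user"]
            # ...and bring them back to it, not to the front page.
            # (Validate "next" against your own host before using it.)
            return redirect(request.args.get("next") or "/")
        return '<form method="post"><input name="user"><button>Log in</button></form>'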

More making up legal citations

Dan Lyke comments (0)

Anthropic’s lawyer was forced to apologize after Claude hallucinated a legal citation

Claude hallucinated the citation with “an inaccurate title and inaccurate authors,” Anthropic says in the filing, first reported by Bloomberg. Anthropic’s lawyers explain that their “manual citation check” did not catch it, nor several other errors that were caused by Claude’s hallucinations.

Anthropic apologized for the error and called it “an honest citation mistake and not a fabrication of authority.”

Anthropic's lawyers take blame for AI 'hallucination' in music publishers' lawsuit. The Latham & Watkins rep...

Ivana Dukanovic said in a court filing that the expert had relied on a legitimate academic journal article, but Dukanovic created a citation for it using Anthropic's chatbot Claude, which made up a fake title and authors in what the attorney called "an embarrassing and unintentional mistake."

But, hey, it apparently got the journal title and year right.

This was apparently in the UMG v Anthropic lawsuit, which is exploring the edges of copyright law and, interestingly, it seems like Anthropic is arguing for more of the "AI crumple zone" space where they blame the user for asking the LLM to generate the piracy. Which... is gonna make some of these music generation systems like Suno very interesting.

More in this Bluesky thread, including Latham & Watkins Hosts First-of-its-Kind AI Academy and Latham’s AI Academy Wins 2025 Legalweek Leaders in Tech Law Award which... ya know, if the judge lets them get away with this bullshit without censure, maybe they are leaders?

Edit: Pivot To AI on the topic, "[Declaration, PDF; case docket; Reuters, archive]"

Thursday May 15th, 2025

Reading coffee grounds

Dan Lyke comments (0)

I mean, sure, it sounds stupid, but related to some of the business decisions people are basing on LLMs? I can believe it.

Greek Woman Files for Divorce After ChatGPT “Reads” Husband’s Affair in Coffee Cup

Appearing on the Greek morning show To Proino, the bewildered husband recounted the incident. “She’s often into trendy things,” he said. “One day, she made us Greek coffee and thought it would be fun to take pictures of the cups and have ChatGPT ‘read’ them.”

9Gag version

more "bogus AI-generated research" in the legal field

Dan Lyke comments (0)

Judge slams lawyers for ‘bogus AI-generated research’

A California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with “numerous false, inaccurate, and misleading legal citations and quotations.” In a ruling submitted last week, Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying “no reasonably competent attorney should out-source research and writing” to AI, as pointed out by law professors Eric Goldman and Blake Reid on Bluesky.

I mean, we've been seeing lawyers use made-up citations and precedent for a while now, but it's twenty fucking twenty-five, so clearly the punitive measures for lying and blaming it on the computer haven't been harsh enough.

Some days my faith in humanity is

Dan Lyke comments (0)

Some days, my faith in humanity is restored.

(Flyer on a utility pole reading "Life is weird? Take some cat love. @ian_the_meow" with tear off tags that have a drawn cat face with a heart on either side.)

Edit: Some Googling suggests https://ianthemeow.com/

Oh fucking joy

Dan Lyke comments (0)

Oh fucking joy. That rvm post-cd hook that's been complaining about something, but that I haven't wanted to fuck with because I always cringe when changing shit on the Mac, is apparently now causing my shells to try to exit when I build a Qt project.

Wednesday May 14th, 2025

KOSA? Why the fuck?

Dan Lyke comments (0)

Wait, is the plural "Dollar Generals", or "Dollars General"?

Increasing plagiarism

Dan Lyke comments (0)

A friend today was showing me how he's getting audio processing code out of Google Gemini, and I had to wonder just how much of it was gonna lead to copyright issues. Anyway...

Colin Gordon @csgordon@discuss.systems

When you submit a paper to an ACM journal, it gets run through TurnItIn (yes, really) and the editors in chief have to look at the report and decide if there are plagiarism concerns. Most submissions have a small percentage (~5%) of verbatim-matching text, from a wide variety of sources. The matches are usually small turns of phrase, technical phrases, affiliations, or ACM copyright text 😛 The exceptions are generally extended versions of conference papers, where obviously large chunks of the extension match the original publication.

But recently I've noticed an up-tick, so far only in the wildly-out-of-scope papers that get desk rejected (mostly papers about using LLMs for NLP) of a high percentage of the paper's text (~30%) being flagged as matching, still from a wide variety of sources, but much larger chunks. A long phrase from here, most of a sentence from there, etc., from very scattered sources across different far-ranging fields. This seems unlikely to be from authors picking up phrases they like from papers they actually encountered. I can't help but think these papers have a high fraction of LLM-generated text, and that LLM-generated text on similar topics tends to output a lot of phrases and sentences repeatedly in aggregate, and these patterns are now getting picked up by traditional plagiarism checkers since there's so much LLM-generated text in the world now.

Newsom gets even more awful

Dan Lyke comments (0)

Governor Newsom releases state model for cities and counties to immediately address encampments with urgency and dignity, or: maybe if we make people illegal they'll somehow magically disappear?

Newsom's recent trend towards being an even more awful human being is best summarized by The Onion: Gavin Newsom Sits Down For Podcast With Serial Killer Who Targets Homeless

Klarna goes back to hiring people

Dan Lyke comments (0)

You don't fucking say? As Klarna flips from AI-first to hiring people again, a new landmark survey reveals most AI projects fail to deliver

Despite this dismal success rate, companies are going all-in on AI, driven largely by the belief that everyone else is doing it. Nearly two-thirds of CEOs (64%) say “the risk of falling behind drives them to invest in some technologies before they have a clear understanding of the value they bring to the organization,” according to the study.

Councilmember Brian Barnacle talking

Dan Lyke comments (0)

Councilmember Brian Barnacle talking about Petaluma finances and the impacts of the Petaluma make-sure-downtown-remains-surrounded-by-empty-lots-and-chain-link-fences Advocates' efforts to put the zoning overlay to a referendum.

https://www.petaluma360.com/ar...n-petaluma-hotel-brian-barnacle/

Tuesday May 13th, 2025

ICE , local law enforcement, and vulnerabilities

Dan Lyke comments (0)

City of Worcester’s May 8 Story Just Doesn’t Add Up

The Worcester Police Department (WPD) says that it received calls that said a crowd had surrounded ICE agents, and other calls that said federal agents were attempting to remove a woman from the scene, but refused to identify themselves. WPD says they had no knowledge about the ICE operation prior to these calls.

Yet, when WPD officers arrived at the scene, they immediately moved to support those that were, at this point, allegedly federal agents.

Via

Multiple ICE impersonation arrests made during nationwide immigration crackdown

“Now don’t be speaking that pig-Latin in my f**king country!” Johnson says, knocking the phone out of his hand.

“He’s crazy. He’s a racist, man,” one of the passengers in the vehicle, another victim, can be heard saying in Spanish.

Via.

Interesting that it's getting harder to tell the impersonators from the alleged official ones...

Monday May 12th, 2025

Earthquake surface rupture

Dan Lyke comments (0)

Stored so that I can find it in the future: First fault rupture ever filmed. M7.9 surface rupture filmed near Thazi, Myanmar, in which an earthquake does more than shake the camera, and, yeah, make sure you watch past the 14 second mark because for those of us who've been through a few shakers but never a big one, that's a "whoah!".

Via

Life with Althaar

Dan Lyke comments (0)

One nice thing about walking to work is that I get a bit of time to enjoy podcasts. Lately I've been bouncing between music podcasts, like Strong Songs and Switched on Pop, and fiction, like Midnight Burger, Fawx and Stallion, The Amelia Project, Kingmaker Histories (Doesn't seem to have a clear "we own this" web presence), and... well... when I need more dick jokes in my life, Today's Lucky Winner. I'd caught up with those, Googled, and ran across a Reddit thread recommending Life with Althaar.

The setup was cute: low-level maintenance guy John B is deployed by corporate to a space station, finds an ad for a room to let at a cheap price, and it turns out the catch is that his roommate is an annoyingly perky alien from a race that humans have a viscerally negative reaction to. But Althaar, the annoyingly perky alien, desperately wants to be friends with humans. And they have a neighbor who's a kindly old-lady plant species who occasionally makes dark comments about interplanetary domination.

Classic sitcom setup. A few funny episodes. Enjoying it, hearing the cast and producers get their sea legs. And then there's an episode in which the protagonist faces mortal peril, and it's an emotional kick in the gut.

And then it's funny, and then... it takes a dark and political turn and holy shit, this is powerful.

I posted a short rave on my blog, and one of the creators dropped by to warn me to stop at episode 30 until they can start creating new episodes again, because it's been on hiatus for a few years, but circumstances in the world mean it's important to them to continue.

Get past the intelligibility issues with Althaar in the first two episodes; that gets better. Some of the sound design uses a little too much stereo separation, so headphones can be a little extreme. Yes, the episodes are long, but...

If you've needed a radio show that's an updated "Cabaret" for modern times, an inspiring tale of politics and resistance and what one cog in a machine can do, add this one to your podcast queue. And when they try to tell you that "nobody saw this coming", as they inevitably will, this is another example that we can point to.

Life with Althaar.

And in case it isn't clear, all of those other podcasts have positive recommendations from me and each deserves their own long review independently, but this one is kicking me in the gut, in an amazing way.

Clippy LLM front end

Dan Lyke comments (0)

'I see you're running a local LLM. Would you like some help with that?' — Dev creates official Clippy 'love letter' to query AI models on your box

Github repo for Clippy

... Through Llama.cpp, it supports models in the popular GGUF format, which is to say most publicly available models. It comes with one-click installation support for Google's Gemma3, Meta's Llama 3.2, Microsoft's Phi-4, and Qwen's Qwen3.
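
Clippy itself is a desktop app, not Python, but "runs GGUF models through llama.cpp" amounts to roughly this via the llama-cpp-python bindings; the model path is a placeholder for whatever .gguf you've downloaded:

    # Roughly what "load a GGUF model via llama.cpp and ask it something" looks
    # like with the llama-cpp-python bindings. Model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="models/some-local-model.gguf", n_ctx=4096)

    out = llm("Q: It looks like you're writing a letter. Want help? A:", max_tokens=64)
    print(out["choices"][0]["text"])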

AI roundup for the weekend

Dan Lyke comments (0)

Malicious npm Packages Infect 3,200+ Cursor Users With Backdoor, Steal Credentials. That's Cursor — The AI Code Editor

Gender, nationality can influence suspicion of using AI in freelance writing

A new study by researchers at Cornell Tech and the University of Pennsylvania shows freelance writers who are suspected of using AI have worse evaluations and hiring outcomes. Freelancers whose profiles suggested they had East Asian identities were more likely to be suspected of using AI than profiles of white Americans. And men were more likely to be suspected of using AI than women.

Via

Increased AI use linked to eroding critical thinking skills

In the study "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking," published in Societies, Gerlich investigates whether AI tool usage correlates with critical thinking scores and explores how cognitive offloading mediates this relationship.

Via, in the replies @borderham.bsky.social notes

It’s not that the machines are getting smarter. They’re just making us dumber.

And the whole thing is in a longer thread about Eric Schmidt's AI batshittery, which is making me think that maybe giving all of the capital to not terribly smart people who allocate money based on who blows smoke up their ass most effectively is going to lead to some pain...

Brian Krebs @briankrebs@infosec.exchange

Beware any industry that claims you need more of what it is selling to offset negative externalities generated by its unbridled use. This seems to be the pitch of the AI cheerleaders: If your systems are doing a poor job screening automated activity from AI, the real problem is you're not using enough AI, dumbass.

Pivot to AI: Study: Your coworkers hate you for using AI at work. PNAS: Evidence of a social evaluation penalty for using AI, by Jessica A. Reif, Richard P. Larrick, and Jack B. Soll.