Flutterby™! (short)

Saturday November 22nd, 2025

Rohnert Parking Dan Lyke / comment 0

Yesterday, Charlene sent me an article headlined A new downtown in four years? Rohnert Park approves plan to bring ‘missing heart’ to city. The article had a bunch of interesting quotes, including this direct challenge to Petaluma's resistance to the Charlie Palmer-branded hotel:

“Premier lodging is extremely challenging in the North Bay and the entire wine region,” he said. “A premier experience would put Rohnert Park on the map. People will end their wine tasting journey, come back, park their car and spend the rest of the evening in downtown Rohnert Park.”

Which, I mean, I wanna give some side-eye to the "hey, let's build a tourist industry on people driving around while consuming alcohol" attitude towards drunk driving, and wonder about further encouraging the "upscale recreational drug use" destination marketing, but I respect the "Petaluma, you're on notice!" pro wrestling vibe.

I didn't look closely at the pictures. Today on Reddit there's this gem, which is best summarized as "Rohnert Parking".

It's a shame we can't see further than "let's stack a story or two of residential on an '80s mall".

Friday November 21st, 2025

Windows is evolving into an agentic OS Dan Lyke / comment 0

PC Mag: Microsoft Exec Asks: Why Aren't More People Impressed With AI?

Mustafa Suleyman, Microsoft's head of AI, vents after the company receives backlash for saying 'Windows is evolving into an agentic OS.'

Books of the moment Dan Lyke / comment 0

A few recent watches and reads:

Light from Uncommon Stars by Ryka Aoki — Wonderful cozy book about aliens and demons battling over the soul of a trans runaway, with bonus culture clash between modern and classical music. Hit me hard in the first few chapters. Didn't quite stick the landing, but I really enjoyed the ride.

The Starving Saints by Caitlin Starling — I ended up reading through it, but... there's a certain sort of cruelty in an illogical world that just doesn't carry me. I ... kinda ... connected with the characters, but the universe wasn't something I could map cause and effect to, and the world was so cruel that the last time I remember feeling this way about a book was China Miéville's Perdido Street Station. It just never clicked for me.

Gotta read the output Dan Lyke / comment 0

Dude offers a patch for OCaml; the source code credits, and ascribes copyright to, someone else; dude claims that he shepherded the LLMs Claude and ChatGPT into creating the patch. So, yeah, blame the copyright infringement on "AI"...

It's a shame that he didn't do this in a place where there were real legal consequences.

LLMs aren't as good for learning as actual reading Dan Lyke / comment 0

Expected outcome...

Gizmodo: Learning With AI Falls Short Compared to Old-Fashioned Web Search

In virtually all the ways that matter, getting summarized information from AI models was less educational than doing the work of search.

Science News: Chatbots may make learning feel easy — but it’s superficial

PNAS Nexus: Experimental evidence of the effects of large language models versus web search on depth of learning Shiri Melumad, Jin Ho Yun

Abstract

The effects of using large language models (LLMs) versus traditional web search on depth of learning are explored. A theory is proposed that when individuals learn about a topic from LLM syntheses, they risk developing shallower knowledge than when they learn through standard web search, even when the core facts in the results are the same. This shallower knowledge accrues from an inherent feature of LLMs—the presentation of results as summaries of vast arrays of information rather than individual search links— which inhibits users from actively discovering and synthesizing information sources themselves, as in traditional web search. Thus, when subsequently forming advice on the topic based on their search, those who learn from LLM syntheses (vs. traditional web links) feel less invested in forming their advice, and, more importantly, create advice that is sparser, less original, and ultimately less likely to be adopted by recipients. Results from seven online and laboratory experiments (n = 10,462) lend support for these predictions, and confirm, for example, that participants reported developing shallower knowledge from LLM summaries even when the results were augmented by real-time web links. Implications of the findings for recent research on the benefits and risks of LLMs, as well as limitations of the work, are discussed.

https://doi.org/10.1093/pnasnexus/pgaf316

China buys CIA insurance provider Dan Lyke / comment 0

Just clearing my bookmarked social media pages: A Chinese firm bought an insurer for CIA agents - part of Beijing's trillion dollar spending spree. So, yeah, you wanna know who's working for the intelligence agencies? Why not just get their health records...

Deepfakes being presented as evidence Dan Lyke / comment 0

AI-generated evidence is showing up in court. Judges say they're not ready.

The case, Mendones v. Cushman & Wakefield, Inc., appears to be one of the first instances in which a suspected deepfake was submitted as purportedly authentic evidence in court and detected — a sign, judges and legal experts said, of a much larger threat.

AI is exposing the flaws in "education" Dan Lyke / comment 0

Good discussion of what it means to be teaching, and learning, in the age of AI: Will Teague — I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking.

I got that via this observation by Sean Purcell (he/him) @teamseaslug@hcommons.social

@jnl There's part of this essay that I hadn't thought about before, which is the ways college education punishes failure.

I've been one to lean on the argument that our students prize the degree and not the education. (That is what they are paying for in a lot of cases, and the universities are much more about saying what you can do with the degree and not how you'll, hopefully, grow.)

But the other side is that the degree mill is built on tracking successes (through classes) and failing an assignment, a test, an entire course is HUGE. (Thousands of dollars, scholarships, admission into the school.) AI is a shortcut, but also sells itself as a way to avoid those potential failures.

We say in our classes, in our educational theory, in our anecdotes outside school, that we learn through failure, but any time I did a project that 'failed' I got my GPA dinged, and that impacted all of the other avenues that I had available to me.


Flutterby™! is a trademark claimed by
Dan Lyke
for the web publications at www.flutterby.com and www.flutterby.net. Last modified: Thu Mar 15 12:48:17 PST 2001