"Premier lodging is extremely challenging in the North Bay and the entire wine
region," he said. "A premier experience would put Rohnert Park on the map. People will end
their wine tasting journey, come back, park their car and spend the rest of the evening in
downtown Rohnert Park."
Which, I mean, I wanna give some side-eye to the "hey, let's build a tourist industry on
people driving around while consuming alcohol" attitude towards drunk driving, and wonder
about further encouraging the "upscale recreational drug use" destination marketing, but
respect the "Petaluma, you're on notice!" pro wrestling vibe.
I didn't look closely at the pictures. Today on
Reddit there's this gem, which is best summarized as "Rohnert Parking".
It's a shame we can't see further than "let's stack a story or 2 of residential on an '80s
mall".
Friday November 21st, 2025
Windows is evolving into an agentic OS
Dan Lyke /
comment 0
Light from Uncommon Stars by Ryka Aoki — Wonderful cozy book about
aliens and demons battling over the soul of a trans runaway, with bonus culture clash
between modern and classical music. Hit me hard in the first few chapters. Didn't quite
stick the landing, but I really enjoyed the ride.
The Starving Saints by Caitlin Starling — I ended up reading through it,
but... there's a certain sort of cruelty in an illogical world that just doesn't carry me.
I ... kinda ... connected with the characters, but the universe wasn't something I could
map cause and effect to, and the world was so cruel that the last time I remember feeling
this way about a book was China Miéville's Perdido Street Station. It just
never clicked for me.
Dude offers a patch for OCaml; the
source code credits and ascribes copyright to someone else; dude claims that he shepherded
the LLMs Claude and ChatGPT into creating the patch. So, yeah, blame the copyright
infringement on "AI"...
It's a shame that he didn't do this in a place where there were real legal consequences.
LLMs aren't as good for learning as actual reading
The effects of using large language models (LLMs) versus traditional web
search on depth of learning are explored. A theory is proposed that when individuals learn
about a topic from LLM syntheses, they risk developing shallower knowledge than when they
learn through standard web search, even when the core facts in the results are the same.
This shallower knowledge accrues from an inherent feature of LLMs: the presentation of
results as summaries of vast arrays of information rather than individual search links,
which inhibits users from actively discovering and synthesizing information sources
themselves, as in traditional web search. Thus, when subsequently forming advice on the
topic based on their search, those who learn from LLM syntheses (vs. traditional web
links) feel less invested in forming their advice, and, more importantly, create advice
that is sparser, less original, and ultimately less likely to be adopted by recipients.
Results from seven online and laboratory experiments (n = 10,462) lend support for these
predictions, and confirm, for example, that participants reported developing shallower
knowledge from LLM summaries even when the results were augmented by real-time web links.
Implications of the findings for recent research on the benefits and risks of LLMs, as
well as limitations of the work, are discussed.
The case, Mendones v. Cushman & Wakefield, Inc., appears to be one of the
first
instances in which a suspected deepfake was submitted as purportedly authentic evidence
in court and detected: a sign, judges and legal experts said, of a much larger threat.
AI in exposing the flaws in "education"
@jnl There's part of this essay that I hadn't thought about before, which is
the ways college education punishes failure.
I've been one to lean on the argument that our students prize the degree and
not the education. (That is what they are paying for in a lot of cases, and the
universities are much more about saying what you can do with the degree and not how
you'll, hopefully, grow.)
But the other side is that the degree mill is built on tracking successes
(through classes) and failing an assignment, a test, an entire course is HUGE. (Thousands
of dollars, scholarships, admission into the school.) AI is a shortcut, but also sells
itself as a way to avoid those potential failures.
We say in our classes, in our educational theory, in our anecdotes outside
school, that we learn through failure, but any time I did a project that 'failed' I got my
GPA dinged, and that impacted all of the other avenues that I had available to me.