Have a project that might be good for Rust. People like libcosmic. Fix the issues with
dependencies and architecture, run:
cargo generate gh:pop-os/cosmic-app-template
Get to where it asks for a "Repository URL", and:
⚠️ Sorry,
"ssh://danlyke@www.flutterby.com/home/danlyke/var/git/squareplay2"
is not a valid value for repository-url
Does everything have to suck? Can't anything just work?
I don't remember seeing "international inc" as a superlative to that particular exclamation....
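One possible dodge, sketched from cargo-generate's documented flags (`--name`, `--define`): since the template's URL validation apparently only accepts http(s) URLs, feed it a placeholder non-interactively and point the real git remote at the ssh path afterward. The `repository-url` variable name is a guess from the prompt text, and the placeholder URL is obviously made up.

```shell
# Workaround sketch, untested against the actual template:
# satisfy the validator with a throwaway https URL...
cargo generate gh:pop-os/cosmic-app-template \
  --name squareplay2 \
  --define repository-url=https://example.com/squareplay2

# ...then wire up the real remote by hand.
cd squareplay2
git remote add origin ssh://danlyke@www.flutterby.com/home/danlyke/var/git/squareplay2
```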
A MeFi thread about restoring scroll position, and then looking at the Lit JavaScript library has me wondering: Is there a web MVC framework that doesn't rely on breaking the user experience by writing the web page from code?
https://ask.metafilter.com/389...ll-position-not-a-solved-problem
Abandoned ill-fated attempts to prompt Gemini CLI to rewrite the CSS, made a few fixes by
just learning the technology in the first place. Hopefully that broke less stuff this time
around.
If there's one thing that attempting to use the Gemini CLI to code with has taught me, it's that there's really no substitute for learning the technology yourself and doing it right in the first place.
Tire dust as a major automobile pollutant has been mentioned too many times previously to
link 'em all (well, okay, a smattering of previouslies:
1, 2, 3, 4), KUOW: Every tire produces a chemical that kills coho
salmon. Can scientists pump the brakes? adds the twist of a University of
British Columbia study on artificial turf fields, which use waste tire crumb as
cushioning infill. The researchers propose additional filtering for new sports
fields, but wonder what can be done to treat existing ones.
https://yesify.net
Enterprise-grade affirmations powered by cutting-edge agreement technology.
Stop thinking. Start agreeing.
Linking to Psych Safety: The Vasa
disaster, about the decision-making processes involved in the building of the Swedish
warship that sank after sailing a kilometer and a half in 1628 (and which now has a museum). It's a good read, but MeFi user Aardvark Cheeselog summed it up:
"That's a terrible idea, Your Majesty," said no shipwright to a king, ever.
which, not a novel idea, but it also made me think "said no DOT to a citizenry asking for
more lanes, ever", etc.
I wrote a kid's homework for them over on Reddit
My complaints center mostly around LLMs, with a slight diversion into generative AI for
music and images:
My first complaint is just the quality of the output. I keep having... you know, the kinds
of friends who DM you random whackadoodle Substack articles, only now they're DMing acres
and acres of LLM-generated slop and saying "this is so insightful" and it isn't. It's
mediocre writing that often doesn't actually make sense. Really, when you use an LLM to
generate prose, it's doing the metaphorical equivalent of seven-fingered humans; you're
just not smart enough to see it.
The second complaint is the outsourcing of thinking. I mean, sure, you can make the argument
that these things are analogous to calculators and you don't actually need to do arithmetic,
but a lot of what I'm seeing is that people have stopped critically reading the output
altogether. Or, if they're coding, they're losing the mental model of the code they're
writing. Turning out stuff that appears to work, sure, but they're quickly dropping into
delusions about what the LLM can and can't know, and they have no mental model for the code
that's actually being generated.
Which, you know, is fine if you don't actually care how things work, but understanding how
things work is how we figure out new and novel and interesting ways to use technologies, and
that's not coming out of LLMs.
The third is how that ties into the anthropomorphization of these things. The literature
refers to this as "epistemia", but I see a lot of people thinking that the LLM is
thinking, and because of the "slot machine" payoff nature of these things, it's right
often enough to be really compelling, but then they use it for something where they get a
grievously wrong answer, and the crater is pretty big. And because of well-known issues
of attention and operator fatigue, there's really no good way for humans to sustain the
kind of attention that's necessary to get reliably good output from these things. Use of
them will bite you.
(Cue all of the cocky kids saying "skill issue". Dude, if that skill issue could be solved,
C would be a safe programming language. Fuck all the way off with that argument.)
Then we get into the ethics of how these things are trained.
The theft of content. I don't even get that cranky about the huge percentage of traffic
that's hitting my web servers from AI vendors and making it harder to have personal sites,
but the use of pirated materials and the remixing of intellectual property in ways that
individual humans would never get away with feels like a different set of rules. Anthropic
and OpenAI pirated how many books? And they're getting a slap on the wrist, after huge
efforts.
I'm old enough to remember when the record industry went after Napster users. If there were
justice applied equally... well...
The power use, from local pollution to climate change to just electricity prices. If there
were some sort of good coming out of it, sure, but, as pointed out up-thread, the LLMs are
overhyped stupidity (every claim for success from these things has been a lie stemming from
overtraining on test data or randomness), and the images are just stupid. Sure, they now
mostly get the right number of fingers, but we're gonna burn down the planet for those
aesthetics. Eeewww.
In the showdown between the Catholic Church and the current administration, I can't believe I'm siding with... I mean... Holy shit, if you'd asked me before this administration to name an evil institution responsible for so much suffering and abuse...
Glyph
@glyph@mastodon.social
here's the AI regulation that I want: if anyone proposing utility for an AI
tool utters the words "I could imagine", a big cartoony boxing glove on a
spring needs to pop out of a box and punch them through a wall