LLMs unredeemable?

2025-05-16 17:08:05.228164+02 by Dan Lyke 0 comments

A plausible, scalable and slightly wrong black box: why large language models are a fascist technology that cannot be redeemed

In what follows, I will argue that being plausible but slightly wrong and un-auditable—at scale—is the killer feature of LLMs, not a bug that will ever be meaningfully addressed, and this combination of properties makes it an essentially fascist technology. By “fascist” in this context, I mean that it is well suited to centralizing authority, eliminating checks on that authority and advancing an anti-science agenda.

Via

[ related topics: Heinlein, Artificial Intelligence, Race ]
