Flutterby™! : "AI" rant of the morning


"AI" rant of the morning

2024-07-16 19:00:34.6401+02 by Dan Lyke 0 comments

Copying my response to an AskMeFi "Why don't we already have AI powered voice assistants?" question here:

Charitably? A lot of people in the VC and tech communities are unable to distinguish bullshit language generation from intelligence. The LLMs are remarkably good at generating language that sounds plausible, and even sounds plausible in the context of the text that you prime them with, but if you look at what Google's "AI" results are giving you, it's rarely even in the ballpark of correct.

The whole reason that Humane AI and Rabbit had to ship what were essentially cut-down phones with their products is that when you run a scam, you need enough different moving parts that people can't tie them all together. Yes, an assistant that reliably did what they claimed theirs did, just through your existing phone, would totally be a useful product that people would pay for, but if you don't have those other moving parts as part of your scam, then people start to look at the individual pieces more closely and realize what's going on.

The bullshit generation is getting "better", as in "more plausible more often", but there's no indication that the technology can get good enough to do much of what's being claimed for it without a much better feedback loop for verification from the user (witness the problems with Android Auto, where it can say "sure, I can navigate you to...", and then navigate you to some place miles away that's plausibly what you asked for, because it didn't have a clarifying pass).

Given who's trained these systems, and what they're currently able to pose as, the question to ask about AI applications is: would this communications process be enhanced by inserting an insecure teenager with a tendency to make shit up rather than admit they don't know? And, yes, there are totally applications where that might be useful (if you don't have coworkers you can talk out a problem with, for instance), and there are attempts at bolting on augmentation for answerable questions when the pattern can be identified. But until there's a solid breakthrough on building a knowledge model that's more than just language probabilities, this is just a bunch of people who've been educated to confuse language generation with smarts pushing their career bets on you.
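To make the "just language probabilities" point concrete: here's a toy sketch (my illustration, not anything from the post — the corpus and function names are made up) of the simplest possible probabilistic text generator, a bigram model. It picks each next word based only on how often that word followed the previous one in its training text, so the output is locally fluent while having no knowledge model behind it at all. LLMs are vastly more sophisticated, but the underlying move — next-token probability, not understanding — is the same family.

```python
import random
from collections import defaultdict

# Tiny made-up training corpus for illustration.
corpus = (
    "the assistant can navigate you to the store "
    "the assistant can answer your question "
    "the store can answer the phone"
).split()

# Record which words followed each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a word sequence using only bigram frequencies."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = follows.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

Every adjacent word pair in the output occurred somewhere in the training text, which is exactly why it reads as plausible — and exactly why plausibility tells you nothing about whether the assistant can actually navigate you anywhere.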

[ related topics: Children and growing up Interactive Drama moron Machinery Community Artificial Intelligence ]
