Flutterby™! : MDN using an LLM to generate wrong answers


MDN using an LLM to generate wrong answers

2023-06-30 22:48:35.297908+02 by Dan Lyke 3 comments

MDN can now automatically lie to people seeking technical information #9208

Summary: MDN's new "AI Explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. This is a strange decision for a technical reference.

If I wanted vaguely human-sounding autogenerated content, I'd use StackExchange.

Mozilla: Introducing AI Help: Your Trusted Companion for Web Development. Yeah, so it's a paid promotion.

[ related topics: Interactive Drama Invention and Design Artificial Intelligence ]

Comments in ascending chronological order:

#Comment Re: MDN using an LLM to generate wrong answers made: 2023-07-02 13:31:56.293181+02 by: brainopener

That's fun. API design can now be driven by what's least likely to cause LLMs trouble.

#Comment Re: MDN using an LLM to generate wrong answers made: 2023-07-03 18:02:35.244159+02 by: Dan Lyke

Yeah, between that and the outsourced labor force that's training these things realizing that they can automate their own jobs, we're gonna see some serious inbreeding.

#Comment Re: MDN using an LLM to generate wrong answers made: 2023-07-03 18:52:38.082646+02 by: markd

digital hemophilia