MDN using an LLM to generate wrong answers
2023-06-30 22:48:35.297908+02 by
Dan Lyke
3 comments
MDN can now automatically lie to people seeking technical information #9208
Summary: MDN's new "ai explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. This is a strange decision for a technical reference.
If I wanted vaguely human sounding autogenerated content, I'd use StackExchange.
Mozilla: Introducing AI Help: Your Trusted Companion for Web Development. Yeah, so it's a paid promotion.
[ related topics:
Interactive Drama Invention and Design Artificial Intelligence
]
comments in ascending chronological order:
#Comment Re: MDN using an LLM to generate wrong answers made: 2023-07-02 13:31:56.293181+02 by:
brainopener
That's fun. API design can now be driven by what's least likely to cause LLMs trouble.
#Comment Re: MDN using an LLM to generate wrong answers made: 2023-07-03 18:02:35.244159+02 by:
Dan Lyke
Yeah, between that and the outsourced labor force that's training these things realizing that they can automate their own jobs, we're gonna see some serious inbreeding.
#Comment Re: MDN using an LLM to generate wrong answers made: 2023-07-03 18:52:38.082646+02 by:
markd
digital hemophilia