Flutterby™! : LLMs are hacking us

LLMs are hacking us

2025-06-09 21:54:46.815667+02 by Dan Lyke 0 comments

Wilhelm Fitzpatrick @rafial@masto.hackers.town

Sitting here gasping like a fish after reading a detailed description of using an AI coding tool to write a unit test for a simple function, in which it is mentioned:

* The AI was apparently unable to "reason" about correct dependency injection or mocking

* 20 prompts were written to "iteratively refine" the solution

* 2000 lines of code were generated of which 50 were used

...and at the end the writer concludes "the final result was successful and saved time over doing it manually"

😱
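For readers unfamiliar with the jargon in that first bullet, here is roughly what "dependency injection and mocking" means in a unit test, as a minimal Python sketch. The function and names are hypothetical illustrations; the post being described does not include any code.

```python
# Hypothetical example: a simple function whose external dependency
# (a client object) is injected as a parameter, so a test can swap in
# a mock instead of a real service.
from unittest.mock import Mock

def fetch_greeting(user_id, client):
    """Return a greeting for a user; `client` is the injected dependency."""
    name = client.get_name(user_id)
    return f"Hello, {name}!"

# In the unit test, the injected dependency is replaced with a mock:
mock_client = Mock()
mock_client.get_name.return_value = "Ada"

assert fetch_greeting(42, mock_client) == "Hello, Ada!"
mock_client.get_name.assert_called_once_with(42)
```

This whole pattern, function plus test, is on the order of a dozen lines, which is what makes the reported 20 prompts and 2000 generated lines so striking.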

Wilhelm Fitzpatrick @rafial@masto.hackers.town

I continue to be astounded by how LLMs seem to hack the basic reasoning processes in our own meat brains!

[ related topics: Software Engineering Writing Mathematics Artificial Intelligence Archival ]
