LLMs are hacking us
2025-06-09 21:54:46.815667+02 by Dan Lyke 0 comments
Wilhelm Fitzpatrick @rafial@masto.hackers.town
Sitting here gasping like a fish after reading a detailed description of using an AI coding tool to write a unit test for a simple function, in which it is mentioned:
* The AI was apparently unable to "reason" about correct dependency injection or mocking
* 20 prompts were written to "iteratively refine" the solution
* 2,000 lines of code were generated, of which 50 were used... and at the end the writer concludes "the final result was successful and saved time over doing it manually"
😱
Wilhelm Fitzpatrick @rafial@masto.hackers.town
I continue to be astounded by how LLMs seem to hack the basic reasoning processes in our own meat brains!