LLMs without the randomness
2025-05-24 16:41:47.222102+02 by Dan Lyke 0 comments
Personally I think LLMs will find another peak of genuine usefulness, but only after people give up the chatbot UI and the AI framing. LLMs are lossy knowledge reproduction for knowledge that has been encoded into language. The LLM purveyors introduced randomness into their chat outputs essentially for demo purposes. They are now sort of victims of their own success, because shipping uses without the randomness lifts the AI veil. I think that randomness, combined with GPU cost and accessibility issues, obscures a lot of other uses. For example, knowledge mapping tools that perform multiple deterministic passes, each seeded by another algorithm that measures the output against other, less lossy data sources.
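
A minimal sketch of the kind of thing I'm imagining, assuming a Hugging Face transformers model; the score_against_reference() check against a trusted source is hypothetical, stand in whatever less lossy data source you like:

```python
# Sketch: multiple deterministic LLM passes, each steered by an outer
# algorithm and scored against a less lossy reference source.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def deterministic_pass(prompt: str) -> str:
    inputs = tok(prompt, return_tensors="pt")
    # do_sample=False means greedy decoding: same prompt, same output, every time
    out = model.generate(**inputs, do_sample=False, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)

def score_against_reference(text: str) -> float:
    # Hypothetical: compare the output against a curated knowledge base,
    # a citation index, a database -- anything less lossy than the model.
    raise NotImplementedError

def best_mapping(seed_prompts: list[str]) -> str:
    # The outer algorithm supplies the variation; the LLM passes stay deterministic.
    outputs = [deterministic_pass(p) for p in seed_prompts]
    return max(outputs, key=score_against_reference)
```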