"AI" still sucks
2024-04-15 17:33:12.477876+02 by Dan Lyke 0 comments
Reading Stratechery: Gemini 1.5 and Google’s Nature, I'm struck by how many conditionals there are. It's about how the large context window of Google's LLM system could enable things like reading a vendor's TOS and checking it for compliance with policy, or really pie-in-the-sky stuff, and
Again, leave aside the implausibility of this demo: the key takeaway is the capabilities unlocked when the model is able to have all of the context around a problem while working; this is only possible with — and here the name is appropriate — a long context window, and that is ultimately enabled by Google’s infrastructure.
This reminds me a lot of the discussion around self-driving automobiles, where, leaving aside the issues of geometry and pollution and whatnot, the question is: could autonomous cars be better drivers than humans? Sure, possibly. But every time someone digs through the deliberate obfuscation of the stats and looks at the numbers for what's happening right now, we're a long way from the other side of that.
Or, of course, the discussion around cryptocurrencies.
Anyway, Futurism: Disillusioned Businesses Discovering That AI Kind of Sucks:
"'This is super cool, but I can't actually get it to work reliably enough to roll out to our customers.'"