Flutterby™! : Apple & LLM reasoning


Apple & LLM reasoning

2024-10-12 16:46:36.108054+02 by Dan Lyke 0 comments

Apple Machine Learning Research: GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

Furthermore, we investigate the fragility of mathematical reasoning in these models and show that their performance significantly deteriorates as the number of clauses in a question increases. We hypothesize that this decline is because current LLMs cannot perform genuine logical reasoning; they replicate reasoning steps from their training data. Adding a single clause that seems relevant to the question causes significant performance drops (up to 65%) across all state-of-the-art models, even though the clause doesn't contribute to the reasoning chain needed for the final answer.
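The abstract describes the perturbation informally; as a rough illustration (not the paper's own code), here is a minimal Python sketch of that kind of check: append a seemingly relevant but arithmetically irrelevant clause to a grade-school math question and see whether the model's final answer changes. The kiwi question and the "smaller than average" distractor are adapted from the paper's GSM-NoOp example; `query_model` is a hypothetical placeholder for whatever LLM client you happen to use.

```python
# Minimal sketch of a GSM-NoOp-style perturbation test (assumptions noted below).
# It is not the paper's evaluation harness, just an illustration of the idea:
# the distractor clause changes nothing about the arithmetic, so a model that
# reasons (rather than pattern-matches) should give the same final answer.

import re

def query_model(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; wire this to your own client."""
    raise NotImplementedError("replace with a real model call")

BASE_QUESTION = (
    "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
    "On Sunday he picks double the number he picked on Friday. "
    "How many kiwis does Oliver have?"
)

# Irrelevant clause: the answer is still 44 + 58 + 88 = 190 either way.
DISTRACTOR = " Five of the kiwis picked on Sunday were a bit smaller than average."

def extract_final_number(answer: str) -> str | None:
    """Take the last integer in the model's reply as its final answer."""
    numbers = re.findall(r"-?\d+", answer.replace(",", ""))
    return numbers[-1] if numbers else None

def compare_once() -> tuple[str | None, str | None]:
    """Ask the same question with and without the distractor clause."""
    baseline = extract_final_number(query_model(BASE_QUESTION))
    perturbed = extract_final_number(query_model(BASE_QUESTION + DISTRACTOR))
    return baseline, perturbed

if __name__ == "__main__":
    # Requires a real query_model implementation to actually run.
    base_ans, pert_ans = compare_once()
    print(f"baseline: {base_ans}, with distractor: {pert_ans}, "
          f"consistent: {base_ans == pert_ans}")
```

Run over many question variants, the fraction of answers that flip when a no-op clause is added is one way to put a number on the fragility the paper reports.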

Via Charlie Stross @cstross@wandering.shop

Here in one paper is the probable reason why Apple abruptly pulled out of OpenAI's current funding round a week ago, after previously being expected to buy at least a billion bucks of equity.

(AI is peripheral to Apple's business model, and not tarnishing their brand over the long term matters more to them than jumping on a passing fad.)

https://appdot.net/@jgordon/113294630427550275

Marcus on AI: LLMs don’t do formal reasoning - and that is a HUGE problem

[ related topics: Apple Computer Theater & Plays Art & Culture Mathematics Macintosh Education Artificial Intelligence ]
