Apple Study Reveals Critical Flaws in AI's Logical Reasoning Abilities

Apple's AI research team has uncovered significant weaknesses in the reasoning abilities of large language models, according to a newly published study. MacRumors: The study, published on arXiv [PDF], outlines Apple's evaluation of a range of leading language models, including those from OpenAI, Meta, and other prominent developers, to determine how well these models handle mathematical reasoning tasks. The findings reveal that even slight changes in the phrasing of questions can cause major discrepancies in model performance, undermining their reliability in scenarios that require logical consistency.

Apple draws attention to a persistent problem in language models: their reliance on pattern matching rather than genuine logical reasoning. In several tests, the researchers demonstrated that adding irrelevant information to a question -- details that should not affect the mathematical outcome -- can lead to vastly different answers from the models.
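The perturbation the researchers describe can be sketched in a few lines. The prompts below are invented for illustration (not taken from the paper), but they show the shape of the test: the added detail is arithmetically irrelevant, so the correct answer is unchanged.

```python
# Illustrative sketch of the study's perturbation idea (prompts invented,
# not from the paper): the same word problem, once plain and once padded
# with a detail that should not affect the arithmetic.
base = ("Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
        "How many kiwis does Oliver have?")
padded = ("Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
          "Five of the kiwis are a bit smaller than average. "
          "How many kiwis does Oliver have?")

def correct_answer(_prompt: str) -> int:
    # The irrelevant clause changes nothing: the answer is still 44 + 58.
    return 44 + 58

assert correct_answer(base) == correct_answer(padded) == 102
```

The study's finding is that models frequently return different answers for variants like these, even though the ground truth is identical.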


Comments Filter:
  • Uh - duh? (Score:2, Redundant)

    by peterww ( 6558522 )

    AI does not reason. It predicts word ordering. Reasoning requires knowledge bases with semantic knowledge and analysis. Word ordering just puts jumbles of symbols in order.

    • An LLM is: what if I gave my smartphone keyboard autocomplete unlimited resources and trained it on everything ever written by anyone? Just like, money is no issue, give it all the processing power and memory and data... what could it do?

      Turns out, a lot. But, it is still fundamentally limited by the whole starting point of building the best auto-complete.
      • Re: (Score:1, Troll)

        by Moryath ( 553296 )

        "Trained" is itself the wrong terminology. "Training" implies learning, which implies intelligence. LLMs are a giant statistical-probability database with an impressive depth of connection between each individual tokenized node, but nowhere in there does any actual intelligence or reasoning ability exist.

        The whole term "artificial intelligence" is the problem. It, and the use of terms like "training," lead people to anthropomorphize what they shouldn't.

        • Bit of an old man yelling at clouds here. Programming relies on a lot of metaphors to help us understand the purpose of things.

          I do not think semaphores are using little colored flags to control my threads,
          which I do not believe to be strings bound on spools to divide my jobs,
          which I do not believe to be gainful employment on the part of my code.

          And:
          Objects are not things I can hold.
          Models are not toy planes
          Servers don't bring you your food
          Links are not part of a chain
          Calling functions does not require a phone

        • The whole term "artificial intelligence" is the problem

          It's a term that never really had any practical meaning other than a program that responds to inputs. In the 80s and 90s, AI was your chess opponent, which basically did fancy heuristics with a static ruleset. It never was intelligent, and still isn't. When most companies describe their product as AI, it's not even an LLM; it's just a variation of the ol' chess opponent.

          Though I'd have to slightly disagree about your training comment. For LLM, yes, it's not training so much as just adding data points for the d

        • by Ksevio ( 865461 )

          "Training" is an accurate and correct term to use here. Not only has it been the common terminology in the machine learning field for decades, it also describes what is happening.

          LLMs aren't just databases; they're weighted neural networks that produce a given result based on a given input. Training adjusts the weights to properly produce that result. Without the training, the model produces gibberish.
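A minimal sketch of what "adjusting the weights" means, using a one-weight toy model and squared-error gradient descent (no real framework, purely illustrative):

```python
# Toy "network" with a single weight w, trained so that w * x hits a target.
# This is the core idea of training: nudge the weights to reduce error.
def train(w: float, x: float, target: float, lr: float = 0.1, steps: int = 200) -> float:
    for _ in range(steps):
        pred = w * x                      # forward pass
        grad = 2.0 * (pred - target) * x  # gradient of squared error w.r.t. w
        w -= lr * grad                    # adjust the weight
    return w

w = train(w=0.0, x=2.0, target=6.0)
assert abs(w * 2.0 - 6.0) < 1e-6  # untrained w=0 gave 0; trained w maps 2 to ~6
```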

        • by jythie ( 914043 )
          One of the issues is that what has become known as AI, well, isn't. The stats people adopted it when they found the cool factor brought more attention and VC funding.
  • by JBMcB ( 73720 )

    There is no reasoning. It's pattern matching based on keywords and weights feeding into Markov chains. Most LLMs also have some inferencing ability hardwired in there by humans, but they don't make those inferences on their own.

    • Re:Reason (Score:5, Insightful)

      by Baron_Yam ( 643147 ) on Tuesday October 15, 2024 @03:32PM (#64866767)

      The funny thing is... Somehow our ability to reason is an emergent property of weighted connections in a network. Because we don't understand how that happens, we don't know why it isn't happening with the AI we have created, or if it's even possible with the setups we're using. We also don't know if it's impossible for a sufficiently complex version of an existing AI system to do it.

      Probably impossible, though; I suspect there's more to it than just 'embiggen it and it will happen'.

      • A traditional neural network isn't the best approximation of actual neurons, so we don't get something that works in quite the same way, and the hardware we run these programs on isn't like an actual brain either. However, when we do create software that actually models a physical brain, it does behave like one. There have been studies that recreate a worm's brain in software: a worm has a small number of neurons that can be fully mapped out, since dissecting worms isn't going to raise many eyebrows.
      • If I gave you a 5 gallon bucket and a 2 gallon bucket, how many buckets did I give you?

      • by jythie ( 914043 )
        "Implemented on" and "emergent property of" are not necessarily the same thing.
    • by narcc ( 412956 )

      There is no reasoning.

      Correct.

      It's pattern matching based on keywords and weights feeding into Markov chains.

      Incorrect. LLMs are non-Markovian.
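The distinction can be made concrete with a toy sketch (invented here, purely for illustration): a k-th order Markov model's prediction depends only on the last k tokens, while an LLM conditions on the entire context window.

```python
# Toy illustration of what information each kind of model conditions on.
def markov_state(context: list[str], k: int = 2) -> tuple[str, ...]:
    # A k-th order Markov chain sees only the last k tokens.
    return tuple(context[-k:])

def llm_state(context: list[str]) -> tuple[str, ...]:
    # An LLM's attention spans the whole context window.
    return tuple(context)

ctx_a = ["the", "cat", "sat", "on", "the"]
ctx_b = ["my", "hat", "is", "on", "the"]

# Identical Markov state despite different histories...
assert markov_state(ctx_a) == markov_state(ctx_b)
# ...but the full contexts differ, so an LLM can distinguish them.
assert llm_state(ctx_a) != llm_state(ctx_b)
```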

  • by gweihir ( 88907 ) on Tuesday October 15, 2024 @03:26PM (#64866735)

    And please stop claiming "faults" in "LLM reasoning abilities". LLMs have no reasoning abilities and pattern matching is not a valid substitute.

    • I think it's fair for Apple to point this out and use "LLM reasoning abilities". You're right in what you're saying, but when you have people who are claiming they're on the path to making "General Artificial Intelligence", or we're "4 years away from AI that will eliminate 50% of jobs", the suggestion is that the AI is actually able to reason; that it's truly intelligent. So it's good that someone with the right resources and the ability to know what's going on can use the language of those hyping AI and
      • by gweihir ( 88907 )

        Hmm. I do admit I sometimes forget the low "reasoning ability" level many people operate on.

    • by war4peace ( 1628283 ) on Tuesday October 15, 2024 @03:53PM (#64866827)

      Question is, do Slashdot editors have enough reasoning abilities, considering the dupefest here?

      • by sconeu ( 64226 )

        Maybe Apple could do a study revealing the critical flaws in Slashdot editors' "reasoning" abilities?

      • by phfpht ( 654492 )
        NO. But being just as bad as humans is not a validation of generative AI.
    • The headline/description is garbage.
      but
      Apple needs to temper people's expectations when Sam Altman is writing things like:
      " ... it’s very possible that creativity and what we think of as human intelligence are just an emergent property of a small number of algorithms operating with a lot of compute power"
      and
      "We decry current machine intelligence as cheap tricks, but perhaps our own intelligence is just the emergent combination of a bunch of cheap tricks."
      Even Mira Murati's papers point in th

  • I expect we'll see a response from Sam Altman and his ilk within days talking about how reasoning ability is overrated anyway, and the artificial intelligence is superior to supposed "real" intelligence on such a level that we simply aren't equipped to understand the reasoning ability of such a superior creation.

    My god, this is stupid. Reasoning ability in LLMs? Just as well say every database in existence has reasoning ability just because you can type a somewhat English-looking phrase in (SELECT * FROM $F

  • The non-deterministic nature of AI language models makes it impossible to offer guarantees, and their results cannot be insured financially or legally. For example, if the AI sends a 1-in-a-million mass email that is highly offensive, the AI producer/maintainer probably has language stating they're not liable.
    • by dfghjk ( 711126 )

      What "non deterministic nature"? And why are "guarantees" of "results" important?

      "For example, If the AI sends a 1 in a million mass email that is highly offensive, the AI producer/maintainer probably has language stating they're not liable."

      They'll have that anyway. It's a problem of legal accountability, not a characteristic of LLMs that you cannot accurately describe.

    • I believe that they are fully deterministic, but generation runs are seeded with random numbers intentionally.

      • 2+2=4. Fully deterministic. Always yields same results given 2+2=? as input.

        Vs.

        LLM given same user input multiple times yielding different results each time? Non deterministic.

        By definition if a random number generator is a key part of your algorithm, it is not deterministic. This should be self evident.

        • I'll use image generators as an example because even though it's a different algorithm, they work in a lot of the same ways.

          You put in a text prompt and get a different image every time, right? No. You can re-run the same prompt with the same seed and get exactly the same picture out of it. You just have to have control over the model to enable that. So maybe not Bing Image Generator but definitely Stable Diffusion.

          It's pseudorandom numbers, so yes - it's deterministic.
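That claim is easy to demonstrate with Python's standard library; the same principle applies to sampling seeds in image or text generators.

```python
import random

# Seeded pseudorandomness is deterministic: the same seed always yields
# the same "random" sequence. Varying the seed is what makes runs differ.
def sample(seed: int, n: int = 5) -> list[int]:
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]

assert sample(42) == sample(42)  # same seed, same sequence, every run
assert sample(42) != sample(43)  # different seed, (almost surely) different sequence
```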

    • All computer programs are deterministic unless they use external phenomena to control their execution. Just because they're so big and complex that we can't easily work out their state doesn't mean they've stopped being deterministic.
  • They did the same a couple of days ago:

    https://apple.slashdot.org/sto... [slashdot.org]

    • Apple is thorough.
      Slashdot editors, not so much.

    • "Hey, LLM, has this article been posted already?"

      See, AI could improve /. Maybe it's only as smart as a cat but if that cat can spot dupes that's something editors miss.

      Humans use cats to hunt mice too. Not because cats are good at anything else but being mean, but they excel at that. Same with LLM pattern matching.

      Apple always shits on tech they're way behind on - until they "revolutionize" it and it's the next best thing. Remember when fanbois were worshiping the Lightning Cable?

      They'll snap-to on AI

  • "Generative AI" is simply not capable of what we would universally consider reasoning. LLMs and other "reflexive" pattern-matching systems may be a stepping stone on the way to AGI, or, they may be a cul-de-sac, and won't have anything at all to do with AGI, if such a thing ever comes to be.

  • I mean, take any formal math proof. You have a set of transformations you can make to existing statements, a set of existing statements, and you apply them to get the form you want. All of this is realizable within a neural network, so any output can only be the product of an input plus a transformation.
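As a toy illustration of "statements plus allowed transformations" (the rules and notation here are invented, nothing like a real proof assistant):

```python
# A tiny exact-match rewrite system: each rule transforms one statement
# into another, and a "proof" is just repeated application of rules.
rules = {
    "(a+0)*1": "a+0",  # multiplicative identity applied to the whole term
    "a+0": "a",        # additive identity
    "a*1": "a",        # multiplicative identity
}

def simplify(expr: str) -> str:
    # Apply rules until no rule matches; the output is fully determined
    # by the input plus the transformations, as the comment above argues.
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules.items():
            if expr == lhs:
                expr, changed = rhs, True
    return expr

assert simplify("(a+0)*1") == "a"
```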

  • ...they have NO reasoning ability
    It's all statistics and clever math

  • I ask how much is 3+5?

    If I change just one character, the '3' to '4', I get a completely different answer.

  • When they learn merely from the words that other people have posted, their 'reasoning' can only be a logical calculation within the domain of what other people have said. But 'reasoning' as the term is usually meant implies novel thought, and therein lies the rub.
  • by RossCWilliams ( 5513152 ) on Tuesday October 15, 2024 @05:02PM (#64867019)
    Have AI take an IQ test. That's the way we determine "intelligence" in humans. If you want to define it differently you need to come up with a different measure. Or admit you are arguing about an ill-defined term that mostly is used to describe how well someone's thinking conforms to a particular social class.
