OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task and LLMs can’t think - only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they can, there is no objective, scientifically sound examination of whether AI models help create working software any faster. Given the multi-billion-dollar investments, and that there has been more than enough time to carry out controlled experiments, this should set off loud alarm bells.

  • WhirlpoolBrewer@lemmings.world · 3 days ago

    If the LLM could reason, shouldn’t it be able to say “my token training prevents me from understanding the question as asked. I don’t know how many 'r’s there are in Strawberry, and I don’t have a means of finding that answer”? Or at least something similar, right? If I asked you what some word in a language you didn’t know meant, you should be able to say “I don’t know that word or language”. You may be able to give me all sorts of reasons why you don’t know it, and that’s all fine. But you would be aware that you don’t know, and would be able to say “I don’t know”.

    If I understand you correctly, you’re saying the LLM gets it wrong because it doesn’t know or understand that words are built from letters - all it knows are tokens (see the tokenizer sketch below). I’m saying that’s fine, but it should be able to reason that it doesn’t know the answer, and say that. I assert that it doesn’t know that it doesn’t know what letters are, because it is incapable of coming to that judgement about its own knowledge and limitations.

    Being able to say what you know and what you don’t know is critical to solving logic problems. Knowing which pieces of information are missing, which can be derived from known things, and which cannot, is key to problem solving based on reason. I still assert that LLMs cannot reason.
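
    To make the token-vs-letter point concrete, here is a minimal sketch using the open-source tiktoken tokenizer. The choice of encoding is an assumption, and the exact split varies between models:

    ```python
    # Minimal sketch: how a BPE tokenizer splits a word into sub-word tokens.
    # The encoding name is an assumption; token boundaries differ between models.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    word = "Strawberry"

    token_ids = enc.encode(word)                   # integer IDs the model actually sees
    pieces = [enc.decode([t]) for t in token_ids]  # the sub-word chunks behind those IDs

    print(token_ids)
    print(pieces)
    # The model operates on these chunks, not on individual letters,
    # which is part of why letter-counting questions trip it up.
    ```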

    • Kuinox@lemmy.world · 2 days ago

      “I’m saying that’s fine, but it should be able to reason that it doesn’t know the answer, and say that.”

      That is of course a big problem. They try to guess too much, but that’s also why it kind of works. Symbolic AI has the opposite problem: it is rarely useful because it can’t guess anything - it is rooted in hard logic and cannot come up with a reasonable guess.
      Humans also guess and sometimes get it wrong; guessing is required to produce results from our thinking instead of being stuck in a state where we don’t have enough data to do anything, like a symbolic AI.

      Seen that way, it becomes a spectrum, with humans somewhere in the middle between LLMs and symbolic AI.
      LLMs are not completely unable to say what they do and don’t know - they are just extremely bad at it from our POV (a crude way to probe this is sketched below).

      The problem with “does it think” is that it doesn’t give you any quantity or quality to measure.
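
      One crude way to put a number on “how well does it know what it knows” is to look at the probability a model assigns to its own answer token. A minimal sketch with the Hugging Face transformers library - the model name is a placeholder, and this is only a toy calibration probe, not a proper evaluation:

      ```python
      # Toy calibration probe: how confident is the model in its next token?
      # The model name below is a placeholder, not a recommendation.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "some-small-causal-lm"  # placeholder
      tok = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      def top_token_confidence(prompt: str) -> tuple[str, float]:
          """Return the most likely next token and the probability assigned to it."""
          inputs = tok(prompt, return_tensors="pt")
          with torch.no_grad():
              logits = model(**inputs).logits[0, -1]  # scores for the next token
          probs = torch.softmax(logits, dim=-1)
          p, idx = probs.max(dim=-1)
          return tok.decode([int(idx)]), p.item()

      for prompt in ["2 + 2 =", "The number of 'r's in 'strawberry' is"]:
          token, confidence = top_token_confidence(prompt)
          print(f"{prompt!r} -> {token!r} (p = {confidence:.2f})")
      # A well-calibrated model would be less confident where it is more likely to be
      # wrong; in practice that link is often weak, which is the "bad at it" part.
      ```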

      • WhirlpoolBrewer@lemmings.world · 2 days ago

        Is the argument that LLMs are thinking because they make guesses when they don’t know things, combined with there being no quantity or quality given to describe thinking?

        If so, I would suggest that the word “guessing” is doing a lot of heavy lifting here. The real question would be “is statistics guessing?” I would say guessing and statistics are not the same thing, and Oxford would agree. An LLM just grabs the token that, according to its training data, is statistically most likely to come next. I don’t think grabbing the most likely next token counts as guessing - that feels very algorithmic and statistical to me. It is also possible I’m still missing the argument.
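
        For concreteness, the “grab the statistically most likely next token” step looks roughly like this; the vocabulary and logits below are invented purely for illustration:

        ```python
        # Toy sketch of next-token selection over a made-up vocabulary.
        # The logits are invented; a real model would compute them from the context.
        import numpy as np

        vocab = ["straw", "berry", "berries", "man", "."]
        logits = np.array([0.2, 3.1, 1.4, -0.5, 0.1])    # raw scores (invented)

        probs = np.exp(logits) / np.exp(logits).sum()    # softmax -> probability distribution

        greedy = vocab[int(np.argmax(probs))]            # "grab the most likely token"
        rng = np.random.default_rng(0)
        sampled = vocab[rng.choice(len(vocab), p=probs)] # or sample from the distribution, as deployed models typically do

        print(dict(zip(vocab, probs.round(3))))
        print("greedy:", greedy, "| sampled:", sampled)
        ```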

        • Kuinox@lemmy.world · 2 days ago

          “Is the argument that LLMs are thinking because they make guesses”

          No, it’s that you can’t root the argument that they don’t think in the fact that they make stuff up, because humans do too. You could root it in the amount of things they guess wrong, but that’s extremely hard to measure.
          Again, I’m not claiming that they think, but that we don’t know until one or the other is proven.
          Right now, thinking that one or the other is true is a matter of belief.

          • WhirlpoolBrewer@lemmings.world · 1 day ago

            I think you can make a strong argument that they don’t think, rooted in the idea that words should mean something and that statistics and thinking don’t mean the same thing. To me, that feels like a fairly valid argument.

            • Kuinox@lemmy.world · 1 day ago

              So you think you need words to be able to think? Are monkeys, birds, and human babies unable to think, then?

              • WhirlpoolBrewer@lemmings.world · 1 day ago

                My apologies, I was too vague. I’m saying that “thinking” by definition is not “statistics”. Where monkeys, birds, and human babies all “think”, LLMs use algorithms and “statistics”. I also think that “statistics” not meaning the same thing as “thinking” is a valid argument. I would go further and say it’s important that words have meaning. That is what I was attempting to convey. I’m happy to clear up anything I was unclear about.

                • Kuinox@lemmy.world · 1 day ago

                  You are mistaking how LLMs are trained for how they work.
                  Just because they were trained with statistics does not mean they compute, or think, using statistics.
                  For example, to do addition, LLMs internally do trigonometry: https://arxiv.org/abs/2502.00873 (a toy illustration of the idea is sketched below).
                  They probably do use statistics for tons of stuff internally, but humans do too: guessing, bias, tendency, preferences.
                  Anthropic’s researchers found that their LLMs have “features” for concepts.
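
                  The paper’s claim, roughly, is that numbers end up encoded as points on circles (helices) and addition is carried out by rotating them. Here is a toy re-creation of that idea - not the paper’s code, and real LLM features combine several frequencies at once:

                  ```python
                  # Toy version of the "rotate points on a circle to add" idea.
                  # Not the paper's code; real features are helices over several periods.
                  import numpy as np

                  T = 100  # one period; angles encode numbers modulo T

                  def embed(n: int) -> np.ndarray:
                      """Represent an integer as a point on the unit circle."""
                      theta = 2 * np.pi * n / T
                      return np.array([np.cos(theta), np.sin(theta)])

                  def rotate_add(a: int, b: int) -> int:
                      """Add by rotating a's point through b's angle, then reading the angle back."""
                      theta_b = 2 * np.pi * b / T
                      rot = np.array([[np.cos(theta_b), -np.sin(theta_b)],
                                      [np.sin(theta_b),  np.cos(theta_b)]])
                      x, y = rot @ embed(a)
                      return int(round(np.arctan2(y, x) * T / (2 * np.pi))) % T

                  print(rotate_add(27, 58))  # 85 - the sum computed purely with trigonometry
                  ```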

                  • WhirlpoolBrewer@lemmings.world · 1 day ago

                    I don’t think you can disconnect how an LLM was trained from how it operates. If you train an LLM to use trigonometry to solve addition problems, I think you will find the LLM will do trigonometry to solve addition problems. If you train an LLM only in Russian, it will speak Russian. I would suggest that regardless of what you train it on, it will choose the statistically most likely next token based on its training data.

                    I would also suggest we don’t know the exact training data used for most LLMs, so as outsiders we can’t say one way or another how the LLM is being trained to do anything. We can try to extrapolate how the LLM was trained from posts like the one you linked, though. In general, if that is how the LLM arrives at its next token, then the training data must be really heavily weighted in that manner.