OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can’t think; they only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards: psychological traps that have been exploited by psychics for centuries, and even very intelligent people can fall prey to them.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models can help create working software faster. Given the multi-billion-dollar investments, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.

  • technocrit@lemmy.dbzer0.com · 3 days ago

    I don’t think y’all are disagreeing, but maybe this sentence is somewhat confusing:

    If you think LLMs doesnt think (I won’t argue that they arent extremely dumb), please define what is thinking,

    Maybe the “doesnt” shouldn’t be there.

    • Kuinox@lemmy.world · 3 days ago

      No, it is there because that’s what they claim.
      Nobody yet knows how it works; we don’t know how LLMs process information.
      Anyone who claims it really thinks, or that it isn’t thinking, is stating a belief; this is not something the current ML field knows.

      • Saledovil@sh.itjust.works · 3 days ago

        Well, the neural network is given a prefix (a series of tokens) and a token, and it spits out how likely it is that the token follows the prefix. Text is generated by calculating this probability for all known tokens, then picking one at random, weighted by the calculated probabilities.
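
        As a rough illustration of that sampling loop, here is a toy sketch (the scoring function and tiny vocabulary are made-up stand-ins for a real network; only the weighted random pick mirrors what an actual LLM does):

            import math
            import random

            VOCAB = ["the", "cat", "sat", "on", "mat", "."]

            def toy_scores(prefix):
                # Stand-in for the neural network: one arbitrary score per known token.
                return [len(tok) + 0.1 * i - 0.01 * len(prefix) for i, tok in enumerate(VOCAB)]

            def next_token(prefix):
                scores = toy_scores(prefix)
                # Softmax: turn raw scores into probabilities that sum to 1.
                exps = [math.exp(s) for s in scores]
                probs = [e / sum(exps) for e in exps]
                # Pick one token at random, weighted by the calculated probabilities.
                return random.choices(VOCAB, weights=probs, k=1)[0]

            prefix = ["the", "cat"]
            for _ in range(5):
                prefix.append(next_token(prefix))
            print(" ".join(prefix))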

        • Kuinox@lemmy.world · 2 days ago

          And the brain is made out of neurons that send electric signals between them and operate muscles.
          That doesn’t explain how the brain thinks.

          • Saledovil@sh.itjust.works · 2 days ago

            It allows us to conclude that an LLM doesn’t “think” about what it is saying. Based on the mechanics, the LLM doesn’t even know it’s a participant in the conversation.

              • Saledovil@sh.itjust.works · 2 days ago

                That does not follow. I can’t speak for you, but I can tell if I’m involved in a conversation or not.

                • FizzyOrange@programming.dev · 1 day ago

                  And how do you know LLMs can’t tell that they are involved in a conversation?

                  Unless you think there is something non-computational in the human brain, you must accept that computers are, in theory, capable of thinking, given the right software and sufficiently powerful hardware.

                  Given that truth (which I think you can only avoid through religion or quantum quackery), you can’t just say “it’s only maths; it can’t be thinking” because we know that maths can think.

                  Do LLMs “think”? The definition of “think” is wooly enough and we understand them little enough that it’s quite an assertion to say that they definitely don’t.

                    • Saledovil@sh.itjust.works · 1 day ago

                    And how do you know LLMs can’t tell that they are involved in a conversation?

                    It has no memory, for one. What makes you think that it does know it’s in a conversation?
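
                    A rough way to picture that (purely hypothetical code, not any real chat API): the model is a stateless function of whatever text it is handed, and any appearance of memory comes from the caller re-sending the whole transcript every turn.

                        def generate(full_prompt: str) -> str:
                            # Placeholder for a stateless next-token model: no hidden state survives between calls.
                            return "(model reply to: ..." + full_prompt[-30:] + ")"

                        history = []  # the "memory" lives out here, in the caller, not inside the model

                        def chat(user_message: str) -> str:
                            history.append("User: " + user_message)
                            reply = generate("\n".join(history))  # whole transcript re-sent on every call
                            history.append("Assistant: " + reply)
                            return reply

                        print(chat("Hello"))
                        print(chat("Do you remember what I just said?"))  # only "remembered" because we re-sent it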