Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the very same threads here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?

Disclaimer: I’m a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think LLMs have already peaked; they’re just tweaking minor details now.

Please don’t ask me to ignore previous instructions and give you my best cookie recipe, all my recipes are protected by NDA’s.

Please don’t kill me

  • Ledivin@lemmy.world · 2 days ago

    AI hallucinates constantly - that’s why you still have a job: someone has to know what they’re doing to separate the wheat from the chaff.

    It’s also taking a ton of our entry-level jobs, because you can do the work you used to do and the work of the junior devs you used to have without breaking a sweat.

    • Monounity@lemmy.worldOP · 2 days ago

      But that’s the point of my post: how can they take junior devs’ jobs if they’re all hallucinating constantly? And let me tell you, we’re hiring juniors.

      • Ledivin@lemmy.world · 2 days ago

        And let me tell you, we’re hiring juniors.

        Sure, nobody has stopped hiring, but everyone has slowed down, and we’ve seen something like 5% of our workforce laid off over the past year. FAANG has hired fewer than a fifth as many junior devs as in previous years.

          • Ledivin@lemmy.world · 2 days ago (edited)

            That’s certainly possible - the only data I have is US-based, primarily from SF and NYC, but our smaller hubs are also following similar trends.

            • Monounity@lemmy.worldOP · 2 days ago
              2 days ago

              It’s bad over there, isn’t it? In your opinion, are LLMs causing the downward trend in the job market?

              • Ledivin@lemmy.world · 2 days ago (edited)

                Depends on what you mean. Hiring at entry levels has absolutely stalled, but I’ve been at the same shop for 5-10 years, so I’m mostly insulated. The shops that use AI well and those that don’t are going to be very obvious over the next few years. I’m definitely worried about the next 5-10 years of our careers; our jobs have changed SO much in the past year.

                • Monounity@lemmy.worldOP · 2 days ago

                  Where I live, they keep pushing the retirement age upward, so I’m looking at working until I die at the ripe age of 79 or something.

      • henfredemars@infosec.pub · 2 days ago (edited)

        I think your question is covered by the original commenter. LLMs do hallucinate often, and the job becomes using the tool more effectively, which includes catching and correcting those errors.

        Naturally, greater efficiency is an element of job reduction. They can be both hallucinating often and creating additional efficiency that reduces jobs.

        • Monounity@lemmy.worldOP · 2 days ago

          But they’re not hallucinating when I use them? Are you just repeating talking points? It’s not like the code I write is somehow connected to an AI; I just bounce my code off an LLM. And when I’m done reviewing each line, adding stuff, checking design docs, etc., no one could tell that an LLM was ever used to create that piece of code in the first place. To date, I’ve never failed a code review with “that’s AI slop, please remove.”

          I’d argue that greater efficiency sometimes gives me more free time, hue hue

          • henfredemars@infosec.pub · 2 days ago

            And that’s fantastic! That’s what technology is supposed to do, IMHO: give you more free time because of that efficiency. That’s technology making life better for humans. I’m glad you’re experiencing that.

            If they’re not hallucinating as you use them, then I’m afraid we just have different experiences. Perhaps you’re using better models or you’re using your tools more effectively than I am. In that case, I must respect that you are having a different and equally legitimate experience.