Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?

Disclaimer: I’m a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think LLMs have already peaked; they’re just tweaking minor details now.

Please don’t ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.

Please don’t kill me

  • andioop@programming.dev
    1 day ago

    I think it’s both.

    It sits at the fast and cheap end of “fast, good, cheap: pick two,” and society is trending toward “fast and cheap” to the exclusion of “good,” to the point that it’s getting harder and harder to find “good” at all sometimes.

    People who care about the “good” bit are upset; people who want to see the stock line go up in the short term, without caring about long-term consequences, keep riding “always pick fast and cheap” and are impressed by the prototypes LLMs can pump out. So devs get fired because LLMs are faster and cheaper, even if they hallucinate and cause tons of tech debt. Move fast and break things.

    Some devs who keep their jobs might use LLMs. Maybe they accurately assessed that what they’re outsourcing to LLMs is so low-skill that even a tool that doesn’t hit “good” can do it right (and that when it screws up, they can catch the mistake and fix it quickly), so they only have to care about “fast and cheap.” Maybe they just want the convenience and are prioritizing “fast and cheap” when they really do need to consider “good.” Bad devs exist too, and I’m sure we’ve all seen incompetent people stay employed despite the trouble they cause for others.

    So as much as this looked at first, to me, like the thing where fascists simultaneously portray opponents as weak (pathetic! we deserve to triumph over them and beat their faces in for their weakness) and strong (big threat, must defeat!), I don’t think that’s exactly what anti-AI folks are doing here. It’s not doublethink, just seeing everyone pick “fast and cheap” and noticing the consequences. That does map easily onto portraying AI as weak (pointing out all the mistakes it makes and how poorly it replaces humans) while also portraying it as strong (pointing out that people keep trying to replace humans with AI and that it’s being aggressively pushed at us).

    There are other things in real life that map onto a simultaneous portrayal as weak and strong: the roach. A baby taking its first steps can accidentally crush a roach, and hell, if the baby fell on a pile of roaches, they’d all die (weak), but it’s also super hard to end an infestation of them (strong). It’s worth checking for doublethink when you see the pattern of “simultaneously weak and strong,” but that’s also just how an honest evaluation of a particular situation can end up.