• logicbomb@lemmy.world · 2 months ago

    My knowledge on this is several years old, but back then, there were already some types of medical imaging where AI consistently outperformed all humans at diagnosis. The studies took existing data, gave both the humans and the AI the same images, and asked each to make a diagnosis, with the correct answer already known. Sometimes, even when the humans reviewed an image after being told the answer, they couldn’t figure out why the AI was right. It would be hard to imagine that AI has gotten worse in the years since. (A toy version of that head-to-head scoring is sketched below.)

    When it comes to my health, I simply want the best outcomes possible, so whatever method gets the best outcomes, I want to use that method. If humans are better than AI, then I want humans. If AI is better, then I want AI. I think this sentiment will not be uncommon, but I’m not going to sacrifice my health so that somebody else can keep their job. There’s a lot of other things that I would sacrifice, but not my health.
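
As an aside on the evaluation setup described above (humans and a model diagnosing the same images whose correct labels are already known), here is a minimal, purely illustrative sketch of how such a head-to-head comparison can be scored. All labels and predictions below are made up; no real study data is shown.

```python
# Hypothetical head-to-head scoring: the model and a human reader diagnose
# the same images, and both are graded against the known correct answers.

def accuracy(predictions, ground_truth):
    """Fraction of cases where the diagnosis matches the known answer."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Toy labels: 1 = disease present, 0 = absent (entirely made up).
ground_truth = [1, 0, 1, 1, 0, 1, 0, 0]
model_preds  = [1, 0, 1, 1, 0, 1, 0, 1]
human_preds  = [1, 0, 0, 1, 0, 1, 1, 0]

print(f"model accuracy: {accuracy(model_preds, ground_truth):.2f}")  # 0.88
print(f"human accuracy: {accuracy(human_preds, ground_truth):.2f}")  # 0.75
```

Real evaluations report more than raw accuracy (sensitivity, specificity, and so on), but the basic setup is the same: identical images, known answers, compare the scores.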

    • Taleya@aussie.zone · 2 months ago

      That’s because the medical one (particularly good at spotting cancerous cell clusters) was a pattern- and image-recognition AI, not a plagiarism machine spewing out fresh word salad (a toy example of that kind of model is sketched below).

      LLMs are not AI
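
To illustrate the distinction being drawn here, below is a minimal sketch of an image-recognition classifier of the kind used on medical scans, assuming PyTorch. The class name, layer sizes, and labels are hypothetical; the point is only that this kind of model maps pixels to a diagnosis score rather than generating text.

```python
# Minimal sketch of a pattern/image-recognition model (assumes PyTorch).
# Sizes, names, and classes are illustrative, not from any real system.
import torch
import torch.nn as nn

class TinyLesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale scan -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # pool to one value per feature map
        )
        self.classify = nn.Linear(32, 2)                 # two classes: benign / suspicious

    def forward(self, x):
        return self.classify(self.features(x).flatten(1))

# One fake 64x64 grayscale image in, one score per class out.
scores = TinyLesionClassifier()(torch.randn(1, 1, 64, 64))
print(scores.shape)  # torch.Size([1, 2])
```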

      • Pennomi@lemmy.world · 2 months ago

        They are AI, but to be fair, it’s an extraordinarily broad field. Even the venerable A* pathfinding algorithm technically counts as AI.
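
Since A* came up: it is a plain graph-search algorithm from the "good old-fashioned AI" era, with no learning involved. A small self-contained sketch on a grid:

```python
# A* pathfinding on a small grid: classic search-based AI,
# nothing to do with machine learning or LLMs.
import heapq

def a_star(grid, start, goal):
    """Return the length of the shortest path on a 0/1 grid (1 = wall), or None."""
    def h(cell):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (estimated total cost, cost so far, cell)
    best_cost = {start: 0}
    while open_heap:
        _, cost, cell = heapq.heappop(open_heap)
        if cell == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                new_cost = cost + 1
                if new_cost < best_cost.get((r, c), float("inf")):
                    best_cost[(r, c)] = new_cost
                    heapq.heappush(open_heap, (new_cost + h((r, c)), new_cost, (r, c)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # 6
```

The only "intelligence" here is a heuristic plus a priority queue, which is exactly why "AI" as a label stretches from this all the way to modern learned models.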

    • DarkSirrush@lemmy.ca · 2 months ago

      iirc the reason it still isn’t used is that, even with it being trained by highly skilled professionals, it had some pretty bad biases around race and gender, and its accuracy only held up for white, male patients (a toy illustration of that kind of breakdown is below).

      Plus, the publicly released results were fairly cherry-picked for their quality.
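
To make the bias point concrete, here is a minimal sketch of the per-group breakdown that can expose it: aggregate accuracy can look acceptable while one subgroup does far worse. Every number and group label below is invented for illustration.

```python
# Hypothetical check for the kind of bias described above: overall accuracy
# can look fine while specific demographic subgroups do much worse.
from collections import defaultdict

# (ground truth, model prediction, demographic group) -- all made up.
cases = [
    (1, 1, "white male"), (0, 0, "white male"), (1, 1, "white male"), (0, 0, "white male"),
    (1, 0, "black female"), (0, 0, "black female"), (1, 0, "black female"), (0, 1, "black female"),
]

totals, correct = defaultdict(int), defaultdict(int)
for truth, pred, group in cases:
    totals[group] += 1
    correct[group] += (truth == pred)

print(f"overall: {sum(correct.values()) / len(cases):.2f}")   # 0.62
for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.2f}")   # 1.00 vs 0.25
```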

    • HubertManne@piefed.social · 2 months ago

      When it comes to AI, I want it to assist. Like, I prefer robotic surgery where the surgeon controls the robot, but I would likely skip a fully automated one.

      • logicbomb@lemmy.world · 2 months ago

        I think that’s the same point the comic is making, which is why it’s called “The four eyes principle,” meaning two different people look at it.

        I understand the sentiment, but I will maintain that I would choose anything that has the better health outcome.

    • Glytch@lemmy.world · 2 months ago

      Yeah, this is one of the few tasks that AI is really good at. It’s not perfect, and a human doctor should always double-check the findings, but diagnostics is something AI can greatly assist with.

        • Bronzebeard@lemmy.zip · 2 months ago

          That’s valuable if the AI can spot things a doctor might miss, or would take longer to notice. It’s easier to determine whether the AI’s diagnosis is incorrect than to come up with one of your own in the first place.