• rowdy@lemmy.zip · edited · 2 months ago

    I hate AI slop as much as the next guy, but aren’t medical diagnoses and detecting abnormalities in scans/X-rays things that generative AI models are actually good at?

    • Mitchie151@lemmy.world · 2 months ago

      Image categorisation AIs, i.e. convolutional neural networks, have been in use since well before LLMs and other generative AI. Some medical imaging machines use this technology to highlight features such as specific organs in a scan. CNNs could likely be trained to be extremely proficient at reading X-rays and CT and MRI scans, but those are generally the less operator-dependent types of scan, though they can get complicated. An ultrasound, for example, is highly dependent on the skill of the operator, and in certain circumstances things can be made to look worse or better than they are.
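
      A minimal sketch of the kind of convolutional classifier described above, in PyTorch. The architecture, layer sizes, and the binary “normal vs. abnormal” labels are illustrative assumptions, not any real medical system.

      ```python
      import torch
      import torch.nn as nn

      class TinyScanCNN(nn.Module):
          """Toy convolutional classifier for single-channel scans."""
          def __init__(self, num_classes: int = 2):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale input
                  nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1),
                  nn.ReLU(),
                  nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.AdaptiveAvgPool2d(1),  # global average pool
                  nn.Flatten(),
                  nn.Linear(32, num_classes),
              )

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              return self.classifier(self.features(x))

      model = TinyScanCNN()
      scan = torch.randn(1, 1, 256, 256)   # stand-in for one preprocessed scan
      probs = model(scan).softmax(dim=1)   # e.g. P(normal), P(abnormal)
      print(probs)
      ```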

      I don’t know why the technology hasn’t become more widespread in the domain. Probably because radiologists are paid really well and have a vested interest in preventing it; they’re not going to want to tag the images for their replacement. It’s probably also because medical data is hard to get permission for: to ethically train such a model, you would need to ask every patient, for every type of scan, whether their images can be used for medical research, which is just another form/hurdle for everyone.

    • medgremlin@midwest.social · 2 months ago

      They don’t use the generative models for this. The AIs that do this kind of work are trained on carefully curated data and have a very narrow scope that they are good at.

      • Ephera@lemmy.ml · 2 months ago

        Yeah, those models are referred to as “discriminative AI”. Basically, if you heard about “AI” from around 2018 until 2022, that’s what was meant.
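
        Loosely, the distinction can be shown with two toy modules: a discriminative model maps an input to label scores, p(y | x), while a generative model is trained to produce data itself. All shapes and sizes here are arbitrary assumptions for illustration.

        ```python
        import torch
        import torch.nn as nn

        x = torch.randn(8, 64)             # batch of 8 input feature vectors

        # Discriminative: score a fixed set of labels given the input, p(y | x)
        classifier = nn.Linear(64, 3)      # e.g. 3 diagnostic classes
        label_scores = classifier(x)       # shape (8, 3)

        # Generative: decode a latent code back into data space, modelling p(x)
        decoder = nn.Linear(16, 64)
        z = torch.randn(8, 16)             # random latent codes
        synthetic_x = decoder(z)           # shape (8, 64): newly "generated" data
        ```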

        • medgremlin@midwest.social · 2 months ago

          The discriminative AIs are just really complex algorithms, and to my understanding are not complete black boxes. As someone who has a lot of medical problems I receive care for, as well as someone who will be a physician in about 10 months, I refuse to trust any black-box programming with my health or anyone else’s.
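
          One illustration of the “not a complete black box” point: with any differentiable classifier you can ask which input pixels most influenced a prediction, via input-gradient saliency. The model below is a throwaway stand-in, not any real diagnostic system; the technique is what matters.

          ```python
          import torch
          import torch.nn as nn

          # throwaway stand-in classifier; any differentiable model works the same way
          model = nn.Sequential(
              nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
          )
          model.eval()

          scan = torch.randn(1, 1, 256, 256, requires_grad=True)
          logits = model(scan)
          pred = logits.argmax(dim=1).item()

          logits[0, pred].backward()            # gradient of winning score w.r.t. pixels
          saliency = scan.grad.abs().squeeze()  # large values = pixels that drove the call
          print(saliency.shape)                 # torch.Size([256, 256])
          ```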

          Right now, the only legitimate use generative AI has in medicine is as a note-taker to ease the burden of documentation on providers. Its work is easily checked and corrected, and if your note-taking robot develops weird biases, you can delete it and start over. I don’t trust non-human things to actually make decisions.

          • sobchak@programming.dev · 2 months ago

            They are black boxes, and can even use the same NN architectures as the generative models (variations of transformers). They’re just not trained to be general-purpose, all-in-one solutions, and they have much more well-defined and constrained objectives, so it’s easier to evaluate how they will perform in the real world (though unforeseen deficiencies and unexpected failure modes are still a problem).
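
            A rough sketch of that point: the same transformer encoder can back either kind of model, and only the head and the training objective differ. All sizes here are toy values chosen for illustration.

            ```python
            import torch
            import torch.nn as nn

            vocab_size, d_model = 1000, 64
            embed = nn.Embedding(vocab_size, d_model)
            backbone = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
                num_layers=2,
            )

            tokens = torch.randint(0, vocab_size, (1, 16))  # one 16-token sequence
            hidden = backbone(embed(tokens))                # (1, 16, 64) shared features

            # Discriminative head: constrained objective, e.g. "finding present or not"
            clf_head = nn.Linear(d_model, 2)
            label_logits = clf_head(hidden.mean(dim=1))     # (1, 2)

            # Generative head: open-ended objective, predict the next token
            lm_head = nn.Linear(d_model, vocab_size)
            next_token_logits = lm_head(hidden[:, -1])      # (1, 1000)
            ```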