From WABE Politics News:

Some of your doctor’s intelligence might not be inside his or her brain. That’s what members of Georgia House and Senate committees on artificial intelligence learned Friday in a hearing […]

  • Apytele@sh.itjust.works · 2 days ago (edited)
    …kinda. I suspect some of what they’re talking about is stuff like MEWS (Modified Early Warning Score), which is basically a calculation based on vital signs, labs, and assessment data that makes sure small changes in a patient’s condition aren’t missed while they’re still small. I’ve actually had the EMR call the rapid response nurse for me based on a blood pressure reading that uploaded directly from the machine, which meant I didn’t need to step away from the patient to start getting help into the room. It also factored in the level of consciousness (slightly loopy) that I charted earlier in the shift and immediately put them under suspicion for some kind of shock, such as sepsis. There are also a lot of things in modern EMRs that just remind us of things so routine stuff isn’t missed, like “hey, this person had a positive MRSA test but there’s no infectious isolation precautions order?” or “hey, one of you did a suicide screening and the patient flagged, but there’s no order for someone to be on suicide watch?”
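    If you’re curious what that kind of calculation actually looks like, here’s a rough sketch in Python. The cutoffs below are one commonly published MEWS variant; real hospitals tune their own thresholds, so treat the numbers as illustrative, not what any particular EMR uses:

    ```python
    def mews(sys_bp, heart_rate, resp_rate, temp_c, avpu):
        """MEWS-style early warning score (illustrative thresholds).
        avpu: 'alert', 'voice', 'pain', or 'unresponsive'."""
        score = 0

        # Systolic blood pressure (mmHg)
        if sys_bp <= 70:      score += 3
        elif sys_bp <= 80:    score += 2
        elif sys_bp <= 100:   score += 1
        elif sys_bp >= 200:   score += 2

        # Heart rate (beats/min)
        if heart_rate <= 40:    score += 2
        elif heart_rate <= 50:  score += 1
        elif heart_rate <= 100: pass
        elif heart_rate <= 110: score += 1
        elif heart_rate <= 129: score += 2
        else:                   score += 3

        # Respiratory rate (breaths/min)
        if resp_rate < 9:      score += 2
        elif resp_rate <= 14:  pass
        elif resp_rate <= 20:  score += 1
        elif resp_rate <= 29:  score += 2
        else:                  score += 3

        # Temperature (°C): scores for both hypo- and hyperthermia
        if temp_c < 35.0 or temp_c >= 38.5:
            score += 2

        # Level of consciousness (AVPU scale)
        score += {'alert': 0, 'voice': 1, 'pain': 2, 'unresponsive': 3}[avpu]

        return score

    # A total at or above some local threshold (often around 5)
    # is what auto-pages the rapid response team.
    if mews(sys_bp=88, heart_rate=118, resp_rate=24, temp_c=38.7,
            avpu='voice') >= 5:
        print("auto-page rapid response")
    ```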

    But you also have times where those warnings lead to alarm fatigue, which is where we get so sick of clicking through bullshit warnings that we start ignoring important ones. When I first started working it reminded me of modding Morrowind as a kid; sometimes I’d get 20+ error messages about some mesh or other being missing, and I’d be clicking through them so fast I’d miss a much more critical one and the whole thing would crash. It’s actually a big part of how RaDonda Vaught killed that lady at Vanderbilt a while back: the ICU nurses were dismissing that override warning as part of their daily routine.

    But now we’re getting into the next level of clinical alarm management, where we’re starting to prune out the ones causing that fatigue. My last job used an EMR called Epic, and Epic actually has a little thumbs up / down icon in the top right corner of each warning. I wound up getting that suicide watch warning I mentioned a lot, because I worked on psychiatry and around half of my patients were suicidal at any one time. The thing about psychiatry, though, is that for the same suicide risk score that would trigger a sitter on another unit, we usually don’t need a person assigned to watch the patient. We mostly managed suicide risk by making sure the patient didn’t have access to the things they would use to hurt themselves, and reserved constant observation for patients who were willing and able to go the extra mile to obtain or create the means to harm themselves. I was getting that warning every time I opened the patient’s chart to put in routine 15-minute safety checks, and it was infuriating and wasting time I didn’t have. So every time I got that warning, I would rate it with the thumbs down and take a second to explain in the little comment box what I just said. After the next Epic update, I didn’t see that warning again.
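    To give a concrete (and entirely made-up) picture of what that kind of pruning produces, here’s a hypothetical sketch of a context-aware alert rule. None of the names, units, or thresholds are Epic’s actual logic; it’s just the shape of the thing:

    ```python
    # Hypothetical sketch of context-aware alert firing, the kind of
    # rule that thumbs-down feedback tends to produce over time.
    # Unit names and score cutoffs are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Patient:
        unit: str            # e.g. "psychiatry", "med-surg"
        suicide_risk: int    # score from the screening tool
        has_watch_order: bool

    def should_fire_suicide_watch_alert(p: Patient) -> bool:
        if p.suicide_risk < 2 or p.has_watch_order:
            return False
        # On psychiatry, moderate risk is managed environmentally
        # (restricting means of self-harm, routine safety checks),
        # so only the highest-risk patients trigger the 1:1 alert.
        if p.unit == "psychiatry":
            return p.suicide_risk >= 3
        return True

    print(should_fire_suicide_watch_alert(
        Patient(unit="psychiatry", suicide_risk=2, has_watch_order=False)))
    # -> False: no more nagging on every chart open
    ```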

    I guess the TL;DR here is that ideally the computers are just augmenting our decision making in ways that make us faster and less prone to human error. In the end, though, I think we’re going to find that there’s a hard cap on how fast humans can process information. I think a lot of companies are hoping that they can replace enough human judgment with these systems that they don’t have to pay for as many working hours, when the real focus should be using these tools to make the people we have even better and making sure those little life-altering details don’t get missed. Same as how AI in art shouldn’t be used to make whole pieces; it should be used to help artists fill in grass or rock textures more quickly or upscale images. In my experience it’s all about the usage, and remembering that these tools should be used to help us, not replace us, if that makes sense.

    • Optional@lemmy.world · 2 days ago (edited)

      Thank you for the detailed information - that’s very interesting stuff.

      I think a lot of companies are hoping that they can replace enough human judgment with these systems that they don’t have to pay for as many working hours

      Yep. And given the nature of our world-famous health care system, that’s guaranteed to cause death and suffering, on top of the usual causes.