Humana also using AI tool with 90% error rate to deny care, lawsuit claims::The AI model, nH Predict, is the focus of another lawsuit against UnitedHealth.

  • AutoTL;DR

    This is the best summary I could come up with:


    Humana, one of the nation’s largest health insurance providers, is allegedly using an artificial intelligence model with a 90 percent error rate to override doctors’ medical judgment and wrongfully deny care to elderly people on the company’s Medicare Advantage plans.

    The lawsuit, filed in the US District Court in western Kentucky, is led by two people who held Humana Medicare Advantage Plan policies and say they were wrongfully denied needed, covered care, harming their health and finances.

    It is the second lawsuit aimed at an insurer’s use of the AI tool nH Predict, which was developed by NaviHealth to forecast how long patients will need care after a medical injury, illness, or event.

    In November, the estates of two deceased individuals brought a suit against UnitedHealth—the largest health insurance company in the US—for also allegedly using nH Predict to wrongfully deny care.

    Humana did not respond to Ars’ request for comment by the time this story initially published, but a spokesperson has since provided a statement, emphasizing that there is a “human in the loop” whenever AI tools are used.

    In both cases, the plaintiffs claim that the insurers use the flawed model to pinpoint the exact date to blindly and illegally cut off payments for post-acute care that is covered under Medicare plans—such as stays in skilled nursing facilities and inpatient rehabilitation centers.


    The original article contains 1,016 words, the summary contains 225 words. Saved 78%. I’m a bot and I’m open source!