ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • Prethoryn Overmind
    15 points · 1 year ago

    Look, I am all for weighing pros and cons. A.I. has massive benefits for humanity and it has its issues, but this article is just silly.

    Why in the fuck are you using ChatGPT to set a cancer treatment plan? When did ChatGPT claim to be a medical doctor?

    Just go see a damn doctor.

    • @Kage520@lemmy.world
      8 points · 1 year ago

      I have been getting surveys asking my opinion on AI as a healthcare practitioner (pharmacist). I feel like they are testing the waters.

      AI is really dangerous for healthcare right now. I’m sure people are using it to ask regular questions they normally Google. I’m sure administrators are trying to see how they can use it to “take the pressure off” their employees (then fire some employees to “tighten the belt”).

      If they can figure out how to fact-check the AI results, maybe my opinion can change, but as long as AI can convincingly lie and not even know it’s lying, it’s a super dangerous tool.

      • Prethoryn Overmind
        -2 points · edited · 1 year ago

        For me the issue isn’t the tool, it’s people. The tool is just what it is: a tool.

        I always like to compare these things to physical tools. If you take a Phillips screwdriver to a flathead screw, you don’t blame the tool; you blame yourself for bringing the improper tool, because as a human you can make mistakes. As a human you should have figured out beforehand: “do I need a flathead or a Phillips?” There are tools capable of doing the job and doing it properly.

        Same if you are the operator of a piece of machinery. If you take a forklift to destroy a house, you probably aren’t going to get very far.

        All of these tools were designed to make life easier and add something positive, but it is how you use the tool that matters.

        The same with a gun. I am not a gun-ownership kind of guy, because of all the shit human beings who just can’t use one properly or only claim to use it properly. Guns get more complicated and so do their use cases, but the truth is a gun was designed to kill or to defend from being killed (this is not a topic about gun rights, just using it as an example). However, in the hands of the wrong person a gun can kill unintentionally. That isn’t the gun’s fault; after all, its design was to kill.

        ChatGPT wasn’t designed to kill, inherently. It wasn’t designed to do anything other than take databases of information and provide what it thinks is correct. If you as a person don’t know how to use it or what to do with it properly, and you aren’t seeking actual medical attention or advice from a professional, then I think that is the person’s fault.

        ChatGPT can’t carry a disclaimer for every little thing. A car with a recall issue, on the other hand, can. If you want to compare it to a faulty part in a car, then sure: modify ChatGPT to just not provide medical advice.

        See, tools can be changed midway through. The tool isn’t the problem; how the person uses the tool is the issue. Access to that tool, and what that tool has access to, can be an issue, but the great thing about tools is that laws can change and tools can change.

        It isn’t the A.I.’s fault if your legislature doesn’t care to enforce that change or law. The same legislature that half of Lemmy is opposed to literally all the time. Tools are only as good as the ways they can be used, too.

        So let’s say, for argument’s sake, the tool is dangerous, and in your defense it absolutely can be used dangerously. Do you call upon the government to shut it down, just like you would call upon the government to regulate or change gun laws?

        Do you also ignore the positive impacts ChatGPT can have because it is doing something else terribly? Imagine medical professionals do create such a system, a modified version that provides medical advice that is good, accurate, and professional. What then? Is ChatGPT still bad? It’s not out of the realm of possibility. A.I. isn’t the enemy because someone’s leadership decided to fire you; leadership is the enemy. Tools are only as bad as the people using them.

        Or, in the case of a recalled car that can kill, they are as bad as the manufacturer making them. I don’t deny you can get a bad car or a bad screwdriver. My point is that if you let the bad outweigh the good, you are missing the point. The bad should be handled by people who understand it better and can design laws and tools to enforce better usage, to make something less bad. So again: don’t blame the tool; blame the people who aren’t protecting you with said tool.

    • @clutch@lemmy.ml
      3 points · 1 year ago

      The issue is hospital administrators thinking that AI is the answer to boost profits.