ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans::Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • @fsmacolyte@lemmy.world
    1 year ago

    I haven’t heard of cognitive schema assimilation. That sounds interesting. It sounds like it might fall prey to challenges we’ve had with symbolic AI in the past though.

    • @dx1@lemmy.world
      1 year ago

      It’s a concept from psychology. Instead of just a model of linguistic construction, the model has to actually be a comprehensive, data-forged model of reality, as far as human observation goes or as far as we care about. In poorly tuned, low-information scenarios it would fall into mostly the same traps humans do (e.g. falling for propaganda or pseudoscientific theories), but, if finely tuned, it should emulate accurate theories and even produce predictive results over an expansive enough domain.
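
      For anyone curious, the Piagetian idea behind this is roughly: a new observation either fits an existing schema (assimilation) or forces the model to restructure (accommodation). Here’s a toy sketch of that loop — all the class and function names here are made up for illustration, not from any real library:

      ```python
      # Toy sketch of Piaget-style schema assimilation vs. accommodation.
      # All names (Schema, WorldModel, observe) are hypothetical.

      class Schema:
          def __init__(self, label, predicate):
              self.label = label
              self.predicate = predicate  # returns True if an observation fits this schema

      class WorldModel:
          def __init__(self):
              self.schemas = []

          def observe(self, obs):
              # Assimilation: the observation fits an existing schema
              for s in self.schemas:
                  if s.predicate(obs):
                      return f"assimilated into '{s.label}'"
              # Accommodation: no schema fits, so the model restructures
              new = Schema(label=f"schema-{len(self.schemas)}",
                           predicate=lambda o, ref=obs: o == ref)
              self.schemas.append(new)
              return f"accommodated as '{new.label}'"

      model = WorldModel()
      model.schemas.append(Schema("bird", lambda o: o in {"sparrow", "robin"}))
      print(model.observe("sparrow"))   # fits the existing "bird" schema
      print(model.observe("penguin"))   # doesn't fit, so a new schema is created
      ```

      The point of the analogy upthread is that an LLM only has the “linguistic” layer of this; it never accommodates its world model against observed reality.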