Air Canada must pay damages after chatbot lies to grieving passenger about discount | Airline tried arguing virtual assistant was solely responsible for its own actions

  • @NocturnalEngineer@lemmy.world
    28 points · 9 months ago

    If it was a human agent, surely they would still be liable?

    They’re an agent of the company. They’re acting on behalf of the company, in accordance with its policies and procedures. If they were providing incorrect information, it then becomes a training issue, surely?

    • @tiramichu@lemm.ee
      33 points · 9 months ago

      Yes, if it was a human agent they would certainly be liable for the mistake, and the law very much already recognises that.

      That’s my whole point here: a company should be just as liable for the behaviour of an AI agent as it is for the behaviour of a human agent, when either one gives plausible but wrong information.