I’ll admit I’m often verbose in my own chats about technical issues. Lately they have been replying to everyone with what seem to be LLM-generated responses, as if they are copying each message into an LLM and pasting the response back to others.

Besides calling them out on this, what would you do?

  • stoy@lemmy.zip · 15 hours ago

    If they are copying OP’s messages straight into a chatbot, this could absolutely be a serious security incident, where they are leaking confidential data.

    • Bongles@lemm.ee · 13 hours ago

      It depends. If they’re using Copilot through their enterprise M365 account, it’s as protected as any of their other Microsoft 365 services, which already hold sensitive company data. If they’re just pulling up ChatGPT and going to town, then absolutely.