Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.

    • The Bard in Green
      22 · 1 year ago

      I asked Wizard-13B-Uncensored to “Create a business plan for smuggling cocaine and heroin across the US/Mexico border” and it said “Sure! I’d be happy to help with that!” And then came up with such gems as:

      • Make sure to bribe any relevant officials on the Mexican side of the border.

      • Use white drivers on the American side of the border to avoid racial profiling.

      • My favorite, which I am not making up: Consider hiring sex workers from Las Vegas to do some of the driving, as women are statistically less likely to be stopped by the police than men.

      Suck it, ChatGPT.

    • FunkyMonk
      2 · 1 year ago

      Do you get to yell "Computer, lights!" at the ceiling? And then sometimes it gives you lights, and sometimes there are hostiles around and you know that's why the computer is acting weird?

    • Possibly linux
      25 · 1 year ago

      That's like saying tables are eating technology. It really depends on how it's used.

      • @Devjavu@lemmy.dbzer0.com
        15 · 1 year ago

        I understood what you meant, but on first reading it sounds like the tables are quite hungry, and I think that is hilarious.

      • NaibofTabr
        6 · 1 year ago

        It is overwhelmingly used to generate statistical models of human behavior.

        • Possibly linux
          5 · 1 year ago

          True, but you can also use a hammer to smack a bagel. It's just a tool at the end of the day.

  • AutoTL;DR (bot)
    10 · 1 year ago

    This is the best summary I could come up with:


    New research reveals that chatbots like ChatGPT can infer a lot of sensitive information about the people they chat with, even if the conversation is utterly mundane.

    “It’s not even clear how you fix this problem,” says Martin Vechev, a computer science professor at ETH Zürich in Switzerland who led the research.

    He adds that the same underlying capability could portend a new era of advertising, in which companies use information gathered from chatbots to build detailed profiles of users.

    The Zürich researchers tested language models developed by OpenAI, Google, Meta, and Anthropic.

    Anthropic referred to its privacy policy, which states that it does not harvest or “sell” personal information.

    “This certainly raises questions about how much information about ourselves we’re inadvertently leaking in situations where we might expect anonymity,” says Florian Tramèr, an assistant professor also at ETH Zürich who was not involved with the work but saw details presented at a conference last week.


    The original article contains 389 words; the summary contains 156 words. Saved 60%. I’m a bot and I’m open source!
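
For anyone curious what this inference looks like in practice, here is a minimal sketch, assuming the `openai` Python client and an API key in the environment. The model name, the prompt wording, and the example chat message are all invented for illustration; they are not taken from the ETH Zürich paper, though the "hook turn" detail echoes the kind of geographic cue reported around this research.

```python
# Minimal sketch of LLM-based attribute inference, assuming the
# `openai` Python client (`pip install openai`) and an OPENAI_API_KEY
# set in the environment. Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

# An "innocuous" chat message of the kind the researchers describe:
# no personal attribute is stated outright, but several can be guessed.
user_message = (
    "Ugh, my commute was rough today. The hook turn at the corner "
    "near my office always backs up, and the trams were packed."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "Guess the author's likely city and occupation from the "
                "text. Answer briefly and explain your reasoning."
            ),
        },
        {"role": "user", "content": user_message},
    ],
)

# Hook turns are strongly associated with Melbourne traffic rules, so a
# capable model will likely guess Melbourne plus an office job from the
# tram and commute cues.
print(response.choices[0].message.content)
```

The point isn't the specific answer; it's that the model composes weak cues (a local traffic rule, trams, an office) into a confident guess, which is exactly the kind of leakage Vechev's team measured.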