- cross-posted to:
- privacy@lemmy.ml
Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.
You fix this problem with locally-run models that do not send your conversations to a cloud provider. That is the only real technical solution.
Unfortunately, the larger models are way too big to run client-side. You could launder your prompts through a smaller LLM to standardize phrasing (e.g. removing idiosyncrasies or local dialects), but there’s only so far you can go with that, because language is deeply personal, and the things people will use chatbots for are deeply personal.
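To make the laundering idea concrete without any LLM at all, here is a crude sketch: a regex scrubber that strips obvious identifiers before a prompt ever leaves your machine. This is only a stand-in for the smaller-model rewrite step described above (in practice you’d have a local model paraphrase the whole text); the patterns and function names here are illustrative, not from any real tool.

```python
import re

# Crude stand-in for "launder your prompt through a smaller local model":
# replace obvious identifiers with placeholders before sending anything
# to a cloud provider. A real setup would paraphrase the text instead.
PATTERNS = {
    "email":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone":  re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "handle": re.compile(r"@\w+"),  # runs after "email" so addresses are already gone
}

def scrub(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(scrub("Email me at jane.doe@example.com or call 555-123-4567"))
# -> Email me at [email] or call [phone]
```

Of course, this only catches surface-level identifiers; the point of the article is that the dangerous leakage is in phrasing and content, which is exactly the part a regex can’t touch.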
This is by no means exclusive to LLMs, of course. Google has your lifetime search history and they can glean all kinds of information from that alone. If you’re older than ~30 or so, you might remember these same conversations from when Gmail first launched. You’d have to be crazy to let Google store all your personal emails for all eternity! And yet everybody does it (myself included, though I’m somewhat ashamed to admit it).
This same problem exists with pretty much any cloud service. When you send data to a third party, they’re going to have that data. And I guarantee you are leaking more information about yourself than you realize. You can even tell someone’s age and gender with fairly high accuracy from a small sample of their mouse movements.
I wonder how much information I’ve leaked about myself from this comment alone…
There is some hope. Mistral is pretty incredible for its size, and it’s a 7B model. There are finetunes on top of it that make it even better - my favorite right now is Open Hermes 2.
There’s still room for improvement, and it’s getting better and better.
Memory isn’t that expensive… NVIDIA generally only gives you a lot of it if you also buy a huge amount of compute (which is expensive), but other hardware manufacturers (e.g. Apple) offer lots of memory with a modest amount of compute power, and they run these models with great performance on hardware that doesn’t break the bank.
Now that there’s a mass market use case for a lot of memory with a modest amount of compute power, I expect other hardware manufacturers will catch up to Apple and ship offerings of their own.
There are other email providers…
Yes, Apple Silicon is impressive, and the fact that I can get up to 192GB of integrated memory in a laptop is remarkable. Unfortunately, the costs are still crazy high (Apple and overpriced RAM: name a more iconic duo), and even 192GB is only about half of what is needed for the largest open models I’m aware of (e.g. BLOOM, at 176B parameters). I don’t think OpenAI has officially stated how big GPT-4 is, but it’s likely even bigger.
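The back-of-the-envelope math here is simple: weights alone need roughly (parameters × bits per parameter ÷ 8) bytes, before activations and KV cache. A quick sketch (my own helper function, assuming fp16 weights for the big model and 4-bit quantization for the small one):

```python
def weight_memory_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate memory for model weights alone (no activations/KV cache)."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # using 1 GB = 1e9 bytes for simplicity

# A 7B model quantized to 4 bits fits easily on a laptop:
print(round(weight_memory_gb(7, 4), 1))   # -> 3.5 (GB)

# A 176B model at fp16 needs well beyond 192GB:
print(round(weight_memory_gb(176, 16)))   # -> 352 (GB)
```

That’s why quantized 7B models are practical client-side today while the frontier models are not, whatever the exact GPT-4 parameter count turns out to be.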
The industry has been stagnating for a long time now in terms of memory, and I hope this will push prices down and capacity up.
The good news is that there is strong motivation for companies like Apple and Google to shift more processing onto client devices, because the cost of running these servers is freakin’ huge.
You’re right of course. There are even some with a focus on privacy, like Proton Mail. But Gmail and similar services are overwhelmingly dominant, and not just because people are dumb. There is real value in having email that is accessible on any device, for free, with enough storage that you never really need to think about it. Proton offers 1GB for free now, which is pretty solid but a far cry from what Google, Microsoft, or Yahoo (yep they’re still around) provide. I mean, Google offered 1GB almost 20 years ago.
I am personally still in the process of de-googling my life, and the idea of updating every account I ever signed up for using my gmail address is daunting. I’ll probably never get 100% of the way there; for now I am satisfied enough moving my personal correspondence and important accounts onto different email. Eventually I will probably set up my own domain and get a premium Proton plan so I won’t be too tightly tied to any particular email provider. Then if Proton ever enshittifies, I can take my domain and go elsewhere.
I fed your comment to ChatGPT 3.5 (telling it that it was a comment that I had written to avoid triggering any of its “as a large language model blah blah privacy” conditioning) and this is what it said:
So not much from just that comment, but a few tidbits that can be added to a profile that builds up more detail over time.
We were already facing this sort of thing before AI exploded, though. A lot of the various Reddit user analysis services out there were able to get a fair bit right about me based just off of my most recent 1000 comments. (Though I just checked my profile on RedditMetis and it got a few significant things wrong; it’s clearly a pretty simple-minded approach to analysis.)
Heh. I just checked the link for why RedditMetis thinks I’m transgender and it referenced this comment where I’m literally objecting to RedditMetis’ interpretation that I’m transgender. Citogenesis at work.