- 1 Post
- 4 Comments
Joined 3 months ago
Cake day: January 29th, 2025
pavodive@lemmy.ml to Firefox@lemmy.ml • "Automatically generated page summaries as an experimental feature of Firefox 139 nightly" • English • 14 · 16 days ago
Well, it is sending it off your device to the AI's API. Luckily it won't have any ID information, such as cookies, screen size, OS, IP, etc.
The problem seems to be with the word "luckily".
pavodive@lemmy.ml to Firefox@lemmy.ml • "Automatically generated page summaries as an experimental feature of Firefox 139 nightly" • English • 24 · 16 days ago
This feels like Windows Recall…
pavodive@lemmy.ml to Asklemmy@lemmy.ml • "What's the underrated quote that will stick with you for life?" • English • 191 · 2 months ago
I would rather have questions that cannot be answered than answers that cannot be questioned.
— Richard Feynman
You didn’t misread. It says something along the lines of: generating the summary locally takes so long that it could be faster to read the article and summarize it yourself.
Then there’s the inconvenience of having a small LLM instance installed locally: being small means it’s not very capable, yet “small” still isn’t really small in terms of disk and memory. So what could the future bring us?
Exactly! The convenience of a big LLM that is faster and more accurate, at the relatively small cost of not being hosted locally. It’s a slippery slope, and as LLMs evolve (both in capability and size), I think we know where it all ends.