WorkingLemmy@lemmy.world to Open Source@lemmy.ml · 1 month ago
FOSS infrastructure is under attack by AI companies (thelibre.news)
cross-posted to: technology@beehaw.org, linux@programming.dev, opensource@jlai.lu, technology@lemmy.world, opensource@programming.dev, OpenSource@europe.pub, hackernews@lemmy.bestiver.se
beeng@discuss.tchncs.de · 1 month ago
You'd think these centralised LLM search providers would be caching a lot of this stuff, e.g. Perplexity or Claude.
droplet6585@lemmy.ml · 1 month ago
There are two prongs to this:
1. Caching is an optimization strategy used by legitimate software engineers. AI dorks are anything but.
2. Crippling information sources outside of the service means information is more easily "found" inside the service.
So if it was ever a bug, it's now a feature.
jacksilver@lemmy.world · 1 month ago
Third prong: constantly looking for new information. Yes, most of these sites may be basically static, but it's probably cheaper and easier to just constantly recrawl everything.
fuckwit_mcbumcrumble@lemmy.dbzer0.com · 1 month ago
They're absolutely not crawling it every time they need to access the data; that would be an incredible waste of processing power on their end as well. Code, though, does change somewhat often, so at the bare minimum they'd still need to check whether it has been updated.
beeng@discuss.tchncs.de · 1 month ago
Hashes for cached content. Anyone know what sort of DB makes sense here?
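[Editor's note: one common answer, offered as an illustrative sketch rather than a recommendation from the thread: key entries by a content hash such as SHA-256, so an unchanged hash means the page needs no re-processing. Any key-value store works; SQLite is a plausible minimal choice:]

```python
import hashlib
import sqlite3

# In-memory DB for the sketch; pass a file path for a persistent cache.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, sha256 TEXT, body BLOB)"
)

def store(url: str, body: bytes) -> bool:
    """Store a fetched body; return True only if the content changed."""
    digest = hashlib.sha256(body).hexdigest()
    row = db.execute("SELECT sha256 FROM pages WHERE url = ?", (url,)).fetchone()
    if row and row[0] == digest:
        return False                    # same hash: skip re-processing
    db.execute(
        "INSERT INTO pages (url, sha256, body) VALUES (?, ?, ?) "
        "ON CONFLICT(url) DO UPDATE SET sha256 = excluded.sha256, body = excluded.body",
        (url, digest, body),
    )
    db.commit()
    return True
```

[At crawler scale the same idea is usually backed by an embedded store like RocksDB or a distributed key-value store, but the hash-compare logic is identical.]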