Sammeeeeeee@lemmy.world to Technology@lemmy.world, English · 2 years ago — Stanford researchers find Mastodon has a massive child abuse material problem (www.theverge.com). Cross-posted to: technology@lemmy.world, technology@lemmy.ml, fediverse@lemmy.ml, technology@beehaw.org, technews@radiation.party
whenigrowup356@lemmy.world, English · 2 years ago — Shouldn't it be possible to create open-source bots that use the same databases as the researchers to automatically flag and block that kind of content?
ozymandias117@lemmy.world, English · 2 years ago — Those databases are highly regulated, since they are themselves CSAM. Apple tried downloading fuzzy hashes of them to devices, and it wasn't able to reliably identify things at all.
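A bot like the one suggested above would, in principle, compare perceptual ("fuzzy") hashes of uploaded media against a vetted hash list, flagging anything within a small Hamming distance. This is a minimal stdlib-only sketch of that matching step; the hash values, threshold, and function names are invented for illustration, and real systems (e.g. PhotoDNA-style services) keep the hash list server-side precisely because distributing it is restricted:

```python
# Hypothetical sketch of fuzzy-hash matching for a moderation bot.
# The hashes and threshold below are made up for illustration.

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

def is_flagged(candidate: int, blocklist: list[int], threshold: int = 8) -> bool:
    """Flag if the candidate hash is within `threshold` bits of any
    blocklisted hash. A small threshold tolerates minor edits (crops,
    re-encodes); raising it increases false positives, which is the
    reliability problem mentioned in the thread."""
    return any(hamming_distance(candidate, h) <= threshold for h in blocklist)

blocklist = [0xDEADBEEFCAFEF00D]          # hypothetical vetted hash
near = 0xDEADBEEFCAFEF00F                  # differs by 1 bit -> flagged
far = 0x0123456789ABCDEF                   # unrelated image -> not flagged

print(is_flagged(near, blocklist))  # True
print(is_flagged(far, blocklist))   # False
```

The hard part isn't this comparison loop; it's lawful access to the hash database and keeping the false-positive rate acceptable, which is where the thread says Apple's on-device attempt fell short.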