• Flying Squid
    -13 points • 10 months ago

    The lawyer claims the joke was made with friends in private, but we don’t know what ‘in private’ means. Does Spain have the authority to automatically flag private Snapchat conversations for terrorist threats? Because I’m dubious. I think it’s much more likely that someone didn’t have their account set private.

    • FuglyDuck
      English • 8 points • edited • 10 months ago

      Snapchat collects the contents of every message you send, including pictures and text. From their Community Guidelines:

      These Guidelines apply to all content (which includes all forms of communication, like text, images, generative AI, links or attachments, emojis, Lenses and other creative tools) or behavior on Snapchat — and to all Snapchatters. We are particularly sensitive to content or behavior that poses a risk of severe harm to Snapchatters, and reserve the right to take immediate, permanent action against users engaging in such behavior. Additional guidance about what we consider to be severe harm and how we take action against it is available here.

      Taking the link hop to the ‘additional guidance’:

      The safety of Snapchatters is our top priority. We take behavior that threatens the safety of our community very seriously, particularly when the threat of harm is severe. We consider severe harm to include both (1) harms that risk significant damage to the physical or emotional well-being of Snapchatters, and (2) the imminent, credible risk of severe harm, including threats to human life, safety, and well-being. We collaborate with experts, safety groups, and law enforcement on these topics in order to better educate ourselves and our community, and to take appropriate action where these threats may arise on our platform. We consider these types of harms to merit a heightened level of scrutiny, as well as swift, strict, and permanent consequences for violators.

      When we identify Snapchatters engaging in any of the following activities, we immediately disable their accounts and, in some instances, refer the conduct to law enforcement:

      • Activity that involves sexual exploitation or abuse, including sharing child sexual exploitation or abuse imagery, grooming, child or adult sex trafficking, or sexual extortion (sextortion)
      • Attempted selling, exchanging, or facilitating sales of dangerous and illicit drugs
      • Credible, imminent threats to human life, safety, or well-being, which may include violent extremism or terrorism-related activities, human trafficking, **specific threats of violence (such as a bomb threat)**, or other serious criminal activities

      In addition to enforcing stricter consequences for these violations, our internal teams are continually working with experts to better understand how we can detect and limit threats, prevent harm, and stay informed of potentially harmful trends. Our work on this topic is never finished and it will continue to evolve with the needs of our community. We invite you to report a safety concern, visit our Safety Center, or learn more about our efforts to address harmful content and promote wellness.

      Emphasis is mine.

      So Snapchat’s content filters picked up on the threat (there may have been an actual human involved), and they sent it to the relevant authorities, complete with all of his details. Most likely the relevant authorities puttered around with it for a bit, sending it through the “proper channels,” by which time he was on the plane and in the air.

      Even if Snapchat didn’t automatically snoop on the message, he could have been reported by one of his friends.