• @jordanlund@lemmy.world · 19 points · 3 months ago

    Roko’s Basilisk. But here’s the thing, once you’re aware of it, you’re fucked. The only solution is to not research it, don’t know anything about it. Live in blissful ignorance.

    • @Naich · 14 points · 3 months ago

      What about if you read about it and didn’t understand it?

    • @jkrtn@lemmy.ml · 9 points · 3 months ago

      You have to believe that a malevolent AI will give enough of a damn about you to bother simulating anything at all, let alone infinite torture, which is useless for it to do once it already exists. Everyone on LessWrong has a well-fed ego, so I get why they were in a tizzy for a while.

      • @jordanlund@lemmy.world · 1 point · 3 months ago

        Well, one punishes you if you deny its existence, the other punishes you if you fail to assist in its development. So it’s a LITTLE different. :)

        Fortunately, for me personally, I helped fund a key researcher who could, in theory, be a major contributor to such a thing. So I have plausible deniability. ;) And I’ve been promised a 15-minute head start before he turns it on.

      • @Denjin · 16 points · 3 months ago

        A silly thought experiment, which in gullible people can produce psychosomatic symptoms like headaches and insomnia.

      • @Nibodhika@lemmy.world · 6 points · 3 months ago

        It’s essentially a thought experiment. Without getting too specific, it goes along the lines of “what if there were a hypothetical bad scenario that gets triggered by you knowing about it?” So if you look it up, now you’re doomed.