• scruiser@awful.systems · 7 points · edited · 3 days ago

    Nice job summarizing the lore in only 19 minutes (I assume this post was aimed at providing full context to people just joining or at least relatively new to tracking all this… stuff).

    Some snarky comments follow, not because the summary should have included these asides (adding them all could easily double the length and leave a casual listener/reader more confused), but because I think they're funny and I need to vent.

    You’ll see him quoted in the press as an “AI researcher” or similar.

    Or decision theorist! With a grand total of one decision theory paper, which he didn't bother getting through peer review because the reviewers wanted, like, actual context, and an actual decision theory rather than hand-waving at paradoxes on the fringes of decision theory.

    What Yudkowsky actually does is write blog posts.

    He also writes fanfiction!

    I’m not even getting to the Harry Potter fanfic, the cult of Ziz, or Roko’s basilisk today!

    Yeah this rabbit hole is deep.

    The goal of LessWrong rationality is so Eliezer Yudkowsky can live forever as an emulated human mind running on the future superintelligent AI god computer, to end death itself.

    Yeah, in hindsight the large number of ex-Christians it attracts makes sense.

    And a lot of Yudkowsky’s despair is that his most devoted acolytes heard his warnings “don’t build the AI Torment Nexus, you idiots” and they all went off to start companies building the AI Torment Nexus.

    He wrote a lot of blog posts about how smart and powerful the Torment Nexus would be, and how we really needed to build the Anti-Torment Nexus, so if he'd had proper skepticism of Silicon Valley and startup/VC culture, he really should have seen this coming.

    There was also a huge controversy in Effective Altruism last year when half the Effective Altruists were shocked to discover the other half were turbo-racists who’d invited literal neo-Nazis to Effective Altruism conferences. The pro-racism faction won.

    I was mildly pleasantly surprised to see a solid half pushing back in the comments responding to the first Manifest, but it looks like the anti-racism faction didn't get any traction to change anything, and the second Manifest conference was just as bad or worse.

    • aio@awful.systems · 8 points · 3 days ago

      decision theory is when there’s a box with money but if you take the box it doesn’t have money

      • scruiser@awful.systems · 5 points · 3 days ago

        If your decision theory can't handle weird, totally-plausible-in-the-near-future hypotheticals where omniscient God-AIs offer you money in boxes if you jump through enough cognitive hoops, what is it really good for?

        • diz@awful.systems · 5 points · edited · 2 days ago

          Tbh whenever I try to read anything on decision theory (even written by people other than rationalists), I end up wondering how they think a redundant autopilot (with majority vote) would ever work. In an airplane, that is.

          Considering just the physical consequences of a decision doesn't work: unless there's a fault, a single channel deciding differently doesn't change what comes out of the voting electronics, so the alternative decisions it could make for the no-fault case never make it through.

          Each one simulating the two or more other autopilots is scifi-brained idiocy. Requiring that the autopilots be exact copies is stupid too (what if we had two different teams write different implementations? I think Airbus actually sort of did that).

          Nothing is going to be simulating anything, and to make matters even worse for philosophers, amateur and academic alike, the whole reason for redundancy is that sometimes a glitch makes the channels not compute the same values, so any attempt to be clever with "ha, we just treat copies as one thing" doesn't cut it either.
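
          (To make that concrete, here's a toy sketch of what 2-out-of-3 voting actually looks like; every name and number is made up, and the real thing lives in certified avionics code, not Python. Note what's absent: no channel simulates or predicts the others, and they don't have to be exact copies.)

          ```python
          # Toy 2-out-of-3 voter over three independently written channels
          # (all names and numbers are made up).

          def channel_a(pitch_error):
              return 0.5 * pitch_error   # one team's control law

          def channel_b(pitch_error):
              return 0.5 * pitch_error   # another team's implementation

          def channel_c(pitch_error):
              return 42.0                # this channel has a fault today

          def vote(outputs, tolerance=0.01):
              # Pass through any value at least two channels agree on (within
              # tolerance); a single glitching channel is simply outvoted.
              for x in outputs:
                  agreeing = [y for y in outputs if abs(x - y) <= tolerance]
                  if len(agreeing) >= 2:
                      return sum(agreeing) / len(agreeing)
              raise RuntimeError("no majority: disconnect the autopilot")

          outputs = [channel_a(0.2), channel_b(0.2), channel_c(0.2)]
          print(vote(outputs))  # 0.1 -- the faulty channel never reaches the actuator
          ```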

          • scruiser@awful.systems · 2 points · 1 day ago

            Yeah, even if computers predicting other computers didn't require overcoming the halting problem (and thus contradict the foundations of computer science), actually implementing such a thing reliably with computers smart enough to qualify as AGI seems absurdly impossible.
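
            (For flavor, a toy version of the standard self-reference argument, purely illustrative with made-up names: the only fully general way to "predict" an arbitrary program is to run it, and an agent that consults the predictor about itself and does the opposite breaks that.)

            ```python
            # Toy illustration (made-up names) of why "just predict the other
            # computer" runs into the same self-reference wall as the halting problem.

            def predict(agent):
                # The only fully general way to predict an arbitrary program is to run it.
                return agent()

            def contrarian():
                # Ask the predictor what I'll do, then do the opposite, so any
                # answer it could give is wrong and the simulation never bottoms out.
                return "defect" if predict(contrarian) == "cooperate" else "cooperate"

            try:
                predict(contrarian)
            except RecursionError:
                print("prediction by simulation never terminates")
            ```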

            • diz@awful.systems · 2 points · edited · 20 hours ago

              To be entirely honest, I don't even like the arguments against EDT (evidential decision theory).

              Smoking lesion is hilarious. So there's a lesion that is making people smoke. It is also giving them cancer in some unrelated way which we don't know, trust me bro. Please bro, don't leave this decision to the lesion, you gotta decide to smoke; it would be irrational to decide not to smoke if the lesion's gonna make you smoke anyway. Correlation is not causation, gotta smoke, bro.

              Obviously, in that dumbass hypothetical, the probability you should condition on is your decision, not the lesion, while the smoking among the cancer cases is driven by the lesion, not by anyone's decision. If those two really were indistinguishable, then the right decision would be not to smoke. And more generally, adopting causal models without statistical data to back them up is called "being gullible".
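
              (Quick toy simulation with completely made-up numbers of the distinction I mean: when the lesion drives the smoking, conditioning on "smokes" is really conditioning on the lesion; make the decision independently of the lesion and the correlation disappears.)

              ```python
              import random

              # Smoking-lesion toy model (all numbers made up): the lesion raises
              # the chance of smoking and of cancer; smoking itself does nothing.
              random.seed(0)

              def cancer_rate(people):
                  return sum(cancer for _, cancer in people) / max(len(people), 1)

              def simulate(decision_independent_of_lesion, n=200_000):
                  people = []
                  for _ in range(n):
                      lesion = random.random() < 0.2
                      if decision_independent_of_lesion:
                          smokes = random.random() < 0.5   # a deliberate choice
                      else:
                          smokes = random.random() < (0.8 if lesion else 0.2)  # lesion-driven
                      cancer = random.random() < (0.5 if lesion else 0.05)
                      people.append((smokes, cancer))
                  smokers = [p for p in people if p[0]]
                  nonsmokers = [p for p in people if not p[0]]
                  return cancer_rate(smokers), cancer_rate(nonsmokers)

              print("lesion drives smoking:", simulate(False))  # big gap: smoking proxies the lesion
              print("independent decision: ", simulate(True))   # gap vanishes: the decision carries no info
              ```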

              The tobacco companies actually did manufacture the data, too; that's where "type-A personality" comes from.

  • BlueMonday1984@awful.systems · 8 points · 4 days ago

    Text version just came up - excellent read as usual.

    A lot of Yudkowsky’s despair is that his most devoted acolytes heard his warnings “don’t build the AI Torment Nexus, you idiots” and they all went off to start companies building the AI Torment Nexus.

    AHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

    While we’re talking about what rationalists actually believe, I’d be remiss not to mention one deeply unpleasant thing about the rationalist subculture: they are really, really into race and IQ theories and scientific racism. Overwhelmingly.

    Considering the whole thing’s deeply fucking steeped in fascism, I’m not shocked.

    • David Gerard@awful.systems (OP, mod) · 9 points · edited · 4 days ago

      The bit in the video showing the cites to the race scientist in the race science pseudojournal is from Shane Legg’s 2008 paper. He just happened to reach for the race scientist first, you see.

      I have someone else's draft here of a writeup nailing a zillion examples of this shit from the wider field of AI since the 1990s; I'm eager for that to finally come out some time.