• 5 Posts
  • 231 Comments
Joined 2 years ago
Cake day: July 4th, 2023


  • Oof on the part of the author though:

    Eliezer Yudkowsky: Nope.

    Algernoq (the blogpost author): I assume this is a “Nope, because of secret author evidence that justifies a one-word rebuttal” or a “Nope, you’re wrong in several ways but I have higher-value things to do than retype the sequences”. (Also, it’s an honor; I share your goal but take a different road.) […]

    Richard_Kennaway: What goal do you understand yourself to share with Eliezer, and what different road?

    Algernoq: I don’t deserve to be arrogant here, not having done anything yet. The goal: I had a sister once, and will do what I can to end death. The road: I’m working as an engineer (and, on reflection, failing to optimize) instead of working on existential risk-reduction. My vision is to build realistic (non-nanotech) self-replicating robots to brute-force the problem of inadequate science funding. I know enough mechanical engineering but am a few years away from knowing enough computer science to do this.






  • Subjectively speaking:

    1. Pre-LLM summaries were for the most part actually short.
    2. They were more directly lifted from human-written sources. I vaguely remember lawsuits, or threats of lawsuits, by newspapers over Google infoboxes and copyright infringement in pre-2019 days, but I couldn't find anything very conclusive with a quick search.
    3. They didn't have the sycophantic, overly (and wrongly) detailed "hey, look at me, I'm a genius" tone that the current batch has.





  • Did you read any of what I wrote? I didn't say that human interactions can't be transactional; I quite clearly (at least I think) said that LLMs are not even transactional.


    EDIT:

    To clarify, and maybe to put it in terms closer to your interpretation.

    With humans: Indeed, you should not have unrealistic expectations of workers in the service industry, but you should still treat them with human decency and respect. They are not there to fit your needs; they have their own selves, which matter. They are more than meets the eye.

    With AI: While you should also not have unrealistic expectations of chatbots (which I would recommend avoiding altogether, really), where humans are more than meets the eye, chatbots are less. Inasmuch as you still choose to use them, by all means remain polite, for your own sake rather than for the bot's. There is nothing below the surface.

    I don't personally believe that taking an overly transactional view of human interactions is desirable or healthy. I think it's more useful to frame it as respecting other people's boundaries and recognizing when you might be a nuisance (or when to be a nuisance, when there is enough at stake). Indeed, I think (not that this appears to be the case for you) that being overly transactional could lead you to believe that affection can be bought, or that you are owed affection.

    And I especially don't think it's healthy to essentially be saying: "have the same expectations of chatbots and service workers".


    TLDR:

    You should avoid catching feelings for service workers because they have their own world and wants, and bringing unsolicited advances makes you a nuisance. It's not just about protecting yourself; it's also about protecting them.

    You should never catch feelings for a chatbot, because it doesn't have its own world or wants, and projecting feelings onto it cuts you off from humanity. It is mostly about protecting yourself, though I would also argue it protects society (by keeping it healthy).





  • A glorious snippet:

    The movement connected to attracted the attention of the founder culture of Silicon Valley and leading to many shared cultural shibboleths and obsessions, especially optimism about the ability of intelligent capitalists and technocrats to create widespread prosperity.

    At first I was confused at what kind of moron would try using shibboleth positively, but it turns out it's just terribly misquoting a citation:

    Rationalist culture — and its cultural shibboleths and obsessions — became inextricably intertwined with the founder culture of Silicon Valley as a whole, with its faith in intelligent creators who could figure out the tech, mental and physical alike, that could get us out of the mess of being human.

    Also lol at insisting on "exonym" as a descriptor for TESCREAL, removing Timnit Gebru and Émile P. Torres and the clear critical intent behind the term; it doesn't really even make sense to use the acronym unless you're doing critical analysis of the movement(s). (Also removing mentions of the especially strong overlap between EA and rationalists.)

    It's a bit of a hack job at making the page more biased, with a very thin veneer of still using the sources.