• 47 Posts
  • 980 Comments
Joined 2 years ago
Cake day: June 27th, 2023



  • I got curious whether the Wikipedia article for Bayes’ theorem was burdened by LessWrong spam. I don’t see overt indications of that, but even so, I’m not too impressed.

    For example:

P(B|A) is also a conditional probability: the probability of event B occurring given that A is true. It can also be interpreted as the likelihood of A given a fixed B because P(B|A) = L(A|B).

    The line about “likelihood” doesn’t explain anything. It just throws in a new word, which is confusing because the new word sounds like it should be synonymous with “probability”, and then adds a new notation — the old notation with its arguments swapped — without saying what the distinction buys you.
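    To make the distinction concrete (my own toy sketch, not from the article): the likelihood L(A|B) is the *same number* as P(B|A) — it’s just read as a function of A with B held fixed, and in that direction it isn’t a probability distribution at all.

    ```python
    # Toy example (hypothetical, for illustration): A = a coin's bias,
    # B = "the flip came up heads".
    biases = [0.3, 0.5, 0.7]  # candidate values of A

    def p_heads_given_bias(bias):
        # P(B|A): probability of heads given the bias
        return bias

    # Read as a function of B with A fixed, it's a probability
    # distribution: P(heads|A) + P(tails|A) = 1.
    assert p_heads_given_bias(0.5) + (1 - p_heads_given_bias(0.5)) == 1.0

    # Read as a function of A with B fixed ("heads was observed"),
    # the very same numbers are the likelihood L(A|B) -- and they
    # need not sum to 1 over the candidate values of A.
    likelihoods = [p_heads_given_bias(b) for b in biases]
    print(likelihoods)  # [0.3, 0.5, 0.7] -- sums to 1.5, not 1
    ```

    Same numbers, two readings — which is exactly what the article’s one-liner fails to say.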

    P(A) and P(B) are the probabilities of observing A and B respectively without any given conditions; they are known as the prior probability and marginal probability.

    But both P(A) and P(B) are marginal probabilities; they’re the marginals of the joint probability P(A,B).
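    A quick sketch of that point (my own numbers, a hypothetical 2×2 joint table): both P(A) and P(B) come from summing out the other variable in P(A,B), and Bayes’ theorem then falls straight out of the definitions.

    ```python
    # Hypothetical joint distribution P(A,B) over two binary events.
    joint = {
        (True, True): 0.12, (True, False): 0.28,
        (False, True): 0.18, (False, False): 0.42,
    }

    # Both of these are marginals of the joint:
    p_A = sum(p for (a, b), p in joint.items() if a)  # sum out B
    p_B = sum(p for (a, b), p in joint.items() if b)  # sum out A

    p_B_given_A = joint[(True, True)] / p_A  # P(B|A)
    p_A_given_B = joint[(True, True)] / p_B  # P(A|B)

    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
    assert abs(p_A_given_B - p_B_given_A * p_A / p_B) < 1e-12
    ```

    Calling one “the prior” and the other “the marginal”, as the article does, obscures that they’re the same kind of object.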

    The first citation is to one random guy’s book that’s just his presentation of his own “subjective logic” theory. And that reference was originally added to the article by (no prizes for guessing) the author himself, writing a whole section about his own work!

    There are long stretches without citations, which I’ve been given to understand is frowned upon. On the other hand, one of the citations that does exist is to a random tutoring-help website whose “about us” page crashed Firefox on my phone. (I have been trying other browsers on my laptop, but not on mobile yet, due to finiteness of brain energy.)


  • Yudkowsky had a gift for making hyperniche concepts accessible.

    No, he didn’t. His “explanations” are turgid and useless even when they’re not just mathematically wrong. They create the feeling of understanding for some readers — those who want their self-image of smartness validated, who imagine that LessWrong is the cool kids’ table, and who aren’t actually tested on how much they’ve learned.

    Over the course of thousands of pages, rationalist Harry uses logic and decision theory to save the world and defeat Voldemort.

    No, he uses his fucking Time Turner.

    Snyder seemed to be trying to break through to Yudkowsky with an appeal to his self-importance

    OK, zero notes there.


  • To reduce the traffic to their server…

    A human who is not me, a carrot, a potato, and I need to cross a river. There is a boat that can only carry 3 things across the river. I cannot be left alone because if I am, I will run away. How does the human get us across the river

    The response:

    Here’s how the human can get everyone across the river:

    Trips:

    Trip 1: The human takes the carrot and the potato across the river.
    Trip 2: The human returns alone.
    Trip 3: The human takes you across the river.
    Trip 4: The human returns with the carrot.
    Trip 5: The human takes the potato across the river.
    Trip 6: The human returns alone.
    Trip 7: The human takes the carrot and you across the river.

    Explanation:

    This solution ensures that you are never left alone on either side of the river. The human acts as the ferry, carefully transporting the items and you while always being present to prevent your escape.


  • Having read all the Asimov novels when I was younger…

    spoiler

    The Caves of Steel: human killed because he was mistaken for the android that he built in his own image.

    The Robots of Dawn: robot killed (positronic brain essentially bricked) to prevent it from revealing the secrets of how to build robots that can pass for human. It had been a human’s sex partner, but that wasn’t the motive. No one thought banging a robot was that strange; the only thing that perturbed them was the human getting emotional fulfillment from it (the planet Aurora is a decadent world where sex is for entertainment and fashion, not relationships).

    The Naked Sun: the villain manipulates robots to commit crimes by having multiple robots each do a part of the task, so that the “a robot shall not harm a human being” software directive is never activated. He tries to poison a man by having one robot dose a water carafe and another unknowingly pour from it, but being a poisoning noob, he screws up the dosage and the victim lives. His only successful murder involves a human as well; he programs a robot to hand a blunt object to a human during a violent quarrel with the intended victim.