  • To elaborate on the other answers about AlphaEvolve: the LLM portion is only one component of AlphaEvolve; the LLM is the generator of random mutations in the evolutionary process. The LLM promoters like to emphasize the involvement of LLMs, but separated from the evolutionary algorithm guiding the process through repeated generations, an LLM is about as likely to write good code as a dose of radiation is to spontaneously mutate you into being able to breathe underwater.

    And the evolutionary aspect requires a lot of compute. They don’t specify in their whitepaper how big their population is or how many generations they run, but it might be hundreds or thousands of attempted solutions repeated for dozens or hundreds of generations. That means running the LLM for thousands or tens of thousands of attempted solutions, and testing each one against the evaluation function every time, just to generate one piece of optimized code (roughly the loop sketched below). This isn’t an approach that is remotely affordable or even feasible for software development, even if you reworked your entire software development process into something like test-driven development on steroids in order to write enough tests to use in the evaluation function (and you would probably get stuck on that step, because it outright isn’t possible for most practical real-world software).
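
    A minimal sketch of that kind of loop, assuming a generic evolutionary algorithm with the LLM as the mutation operator (this is not AlphaEvolve’s published algorithm; llm_mutate, evaluate, TARGET, and the population/generation sizes are all toy placeholders):

    ```python
    import random
    import string

    POPULATION_SIZE = 100  # placeholder; the whitepaper doesn't give real numbers
    GENERATIONS = 50       # placeholder, ditto

    TARGET = "print('hello')"  # pretend this string is the "optimal" program

    def llm_mutate(parent: str) -> str:
        """Toy stand-in for the LLM call: perturb one character at random.
        The mutation itself is undirected; the guidance comes entirely
        from selection against the evaluation function below."""
        i = random.randrange(len(parent))
        return parent[:i] + random.choice(string.printable) + parent[i + 1:]

    def evaluate(candidate: str) -> float:
        """Toy stand-in for the evaluation function: count matching characters.
        For real software this would be a test suite thorough enough to score
        arbitrary generated code, which is the usually-impossible part."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    def evolve(seed: str) -> str:
        population = [seed] * POPULATION_SIZE
        for _ in range(GENERATIONS):
            # One LLM call plus one full evaluation per candidate per
            # generation: POPULATION_SIZE * GENERATIONS runs in total,
            # all to produce a single optimized artifact.
            children = [llm_mutate(random.choice(population))
                        for _ in range(POPULATION_SIZE)]
            # Selection pressure comes from the evaluator, not the LLM.
            population = sorted(population + children,
                                key=evaluate, reverse=True)[:POPULATION_SIZE]
        return population[0]

    print(evolve("x" * len(TARGET)))
    ```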

    AlphaEvolve’s successes are all on very specific, very well-defined and constrained problems: finding particular algorithms, as opposed to doing general software development.

  • He claims he was explaining what others believe, not what he believes, but if that is so, why defend the stance so aggressively?

    Literally the only difference between Scott’s beliefs and AI 2027 as a whole is that his prophecy estimate is a year or two later. (I bet he’ll be playing up that difference as AI 2027 fails to happen in 2027, then also doesn’t happen in 2028.)

    Elsewhere in the thread he whines to the mods that the original poster is spamming every vaguely LessWrong- or EA-related subreddit with engagement bait. That poster is katxwoods… as in Kat Woods… as in a member of Nonlinear, the EA “organization” whose idea of philanthropic research was nonstop exotic vacations around the world. And, iirc, they are most infamous among us sneerers for “hiring” an underpaid (really underpaid, as in couldn’t afford basic necessities) intern they also used as a 24/7 live-in errand girl, drug runner, and sexual servant.

  • I was just about to point out several angles this post neglects, but from the edit it looks like the post is only intended to address a narrower question. Among the angles outside that question: philanthropy by the ultra-wealthy often serves as a tool for reputation laundering and influence building. The same criticism can be made of a lot of conventional philanthropy, but I don’t think that should absolve EA.

    This post somewhat frames the question as a comparison between EA and conventional philanthropy and foreign aid efforts… which, okay, but that is a low bar, especially when you look at some of the stuff the US has done with its foreign aid.

  • The latest twist I’m seeing isn’t blaming your prompting (although they’re still eager to do that), it’s blaming your choice of LLM.

    “Oh, you’re using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, then you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren’t trying the right models, so allow me to educate you with all my prompt fondling experience. You’re trying to make some general point? Clearly you just need to try another model.”