• @AmbientChaos@sh.itjust.works

    Consumer software running on a consumer OS should not grab all available RAM just because it can. Doing so pushes other applications out to swap, and they have to be loaded back into RAM when the user returns to them. In a server environment, say a dedicated SQL server, it makes more sense to grab all available RAM and aggressively cache frequently accessed data so it can be served sooner, on the assumption that the server’s primary role is to perform SQL operations as quickly as possible.
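    For the server case, the “grab the RAM and cache in it” idea looks roughly like this. A minimal sketch in C, assuming a Linux/glibc system (sysconf’s _SC_AVPHYS_PAGES is a glibc extension); the CACHE_FRACTION constant and the half-of-free-RAM policy are made up for illustration, not anything a real database actually ships:

        /* Sketch: size a cache from the RAM actually available right now,
         * instead of blindly grabbing everything. Assumes Linux/glibc. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        #define CACHE_FRACTION 0.5  /* hypothetical policy: take half of free RAM */

        int main(void) {
            long page_size   = sysconf(_SC_PAGESIZE);
            long avail_pages = sysconf(_SC_AVPHYS_PAGES); /* pages currently free */
            if (page_size < 0 || avail_pages < 0) {
                fprintf(stderr, "sysconf not supported here\n");
                return 1;
            }

            size_t cache_bytes = (size_t)((double)avail_pages * page_size * CACHE_FRACTION);
            char *cache = malloc(cache_bytes); /* reserve the cache up front */
            if (cache == NULL) return 1;

            printf("reserved %zu MiB for caching\n", cache_bytes >> 20);
            /* ...fill with frequently accessed data here... */
            free(cache);
            return 0;
        }

    The point is that this is a deliberate, workload-specific policy; on a consumer desktop the same move just evicts everyone else’s memory to swap.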

    Specifically with Photoshop, what would be the benefit of it aggressively reserving RAM beyond what it needs to function?

      • @9bananas@lemmy.world

        this is not true.

        it entirely depends on the specific application.

        there is no OS-level, standardized, dynamic re-allocation of RAM between running programs (definitely not on windows, and i assume it’s the same for OSX).

        this is because most programming languages handle RAM allocation within the individual program (through an allocator like malloc), so the OS can’t just take committed memory back however it wants.

        the OS could put processes to “sleep”, but that’s basically just the previously mentioned swap memory and leads to HD degradation and poor performance/hiccups, which is why it’s not used much…

        so, no.

        RAM is usually NOT dynamically allocated by the OS.

        it CAN be dynamically allocated by individual programs, IF they are written in a way that supports dynamic allocation of RAM, which some languages do well, others not so much…

        it’s certainly not universally true.
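        as a concrete example of program-level dynamic allocation, here’s the classic pattern in C, where the program itself asks for memory at runtime and grows it on demand (a minimal sketch; the growing-buffer use case is made up for illustration):

            /* Sketch: the program, not the OS, decides when to request
             * and release memory. The growing buffer is a toy example. */
            #include <stdio.h>
            #include <stdlib.h>

            int main(void) {
                size_t cap = 1024;
                int *buf = malloc(cap * sizeof *buf);   /* ask the allocator for memory */
                if (buf == NULL) return 1;

                for (size_t i = 0; i < 1000000; i++) {
                    if (i == cap) {                     /* out of room? grow on demand */
                        cap *= 2;
                        int *tmp = realloc(buf, cap * sizeof *buf);
                        if (tmp == NULL) { free(buf); return 1; }
                        buf = tmp;
                    }
                    buf[i] = (int)i;
                }

                printf("grew buffer to %zu ints\n", cap);
                free(buf);                              /* hand the memory back */
                return 0;
            }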

        also, what you describe when saying:

        Any modern OS will allocate RAM as necessary. If another application needs, it will allocate some to it.

        …is literally swap. that’s exactly what the previous user said.

        and swap is not the same as “allocating RAM when a program needs it”; instead it’s the OS going “oh shit! I’m out of RAM and need more NOW, or I’m going to crash! better be safe and steal some memory from disk!”

        what happens is:

        the OS runs out of physical RAM and needs more, so it marks a portion of the disk as swap space and starts paging memory out to that instead.

        HDs are not built for this use case, so whichever processes use the swap space become slooooooow and responsiveness suffers greatly.
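        you can watch this happen with a toy program that commits more memory than the machine physically has (a deliberately abusive sketch, assuming a Linux box with swap enabled; depending on overcommit settings the OOM killer may end the process before malloc ever fails, so don’t run it on a machine you care about):

            /* Sketch: touch more memory than physically exists, forcing the
             * OS to page out to swap. Assumes Linux/glibc with swap enabled.
             * Note: with default overcommit, the OOM killer may kill the
             * process before malloc ever returns NULL. */
            #include <stdio.h>
            #include <stdlib.h>
            #include <string.h>

            #define CHUNK_BYTES (256UL * 1024 * 1024) /* 256 MiB per step */

            int main(void) {
                unsigned long total = 0;
                for (;;) {
                    char *p = malloc(CHUNK_BYTES);
                    if (p == NULL) break;          /* allocator finally refused */
                    memset(p, 0xAB, CHUNK_BYTES);  /* touching pages commits them for real */
                    total += CHUNK_BYTES;
                    printf("committed %lu MiB\n", total >> 20);
                }
                return 0; /* leak is intentional; the OS reclaims everything on exit */
            }

        once physical RAM is exhausted you’ll feel the whole system hiccup as pages start going to disk.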

        on top of that, flash storage is only rated for a certain number of write/erase cycles per cell. that rating is effectively the “lifespan” of an SSD.

        RAM is built for a practically unlimited number of (very fast) R/W operations.

        SSDs and hard drives are NOT built for that.

        RAM sees at least an order of magnitude more R/W ops than a disk is ever expected to handle, so when a computer uses swap excessively, instead of as the very last resort it’s intended to be, it vastly shortens the lifespan of the disk.

        for an example of a VERY stupid, VERY poor implementation of this behavior, look up the apple M1’s rapid SSD degradation.

        short summary:

        apple only put 8GB of RAM into the base first-gen M1 machines, which made the OS lean on swap almost continuously, which wore through the SSD’s write cycles MUCH faster than expected.

        …and since the SSD is soldered onto the mainboard, wearing it out completely bricks the device, in about half a year to a year depending on usage.

        TL;DR: you’re categorically and objectively wrong about this. sorry :/

        hope you found this explanation helpful tho!