• @PriorityMotif@lemmy.world · 4 months ago

    You can probably find a used workstation/server capable of taking 256GB of RAM for a few hundred bucks and fit at least a few GPUs in there. You’ll probably spend a few hundred on top of that to max out the RAM. Performance doesn’t scale much past 4 GPUs because the CPU will have a hard time keeping up with the traffic. So for a ghetto build you’re looking at $2k unless you have a cheap/free local source.

    • @areyouevenreal@lemm.ee · 4 months ago

      Without sufficient VRAM it probably couldn’t be GPU-accelerated effectively. Regular RAM is for CPU use. You can swap data between the two pools, and I think some AI engines do this to run larger models, but it’s a slow process, and you probably wouldn’t gain much from it without huge GPUs with lots of VRAM. PCIe just isn’t as fast as local RAM or VRAM. This means the model would effectively still run at CPU speed, just very slowly.
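A back-of-envelope sketch of why swapping weights over PCIe is so slow. All numbers here are rounded assumptions, not benchmarks: a ~70B-parameter model at 4-bit quantization is roughly 35 GB, PCIe 4.0 x16 peaks around 32 GB/s, and high-end GPU VRAM is around 900 GB/s. Each generated token has to touch every weight once, so the time to move or read the whole model bounds the token rate:

```python
# Assumed, rounded figures (illustrative only, not measurements).
MODEL_GB = 35    # ~70B params quantized to 4 bits
PCIE_GBS = 32    # PCIe 4.0 x16 theoretical peak
VRAM_GBS = 900   # high-end GPU memory bandwidth

# If weights must stream over PCIe every token, the bus is the ceiling.
pcie_s_per_token = MODEL_GB / PCIE_GBS   # ~1.1 s per token
vram_s_per_token = MODEL_GB / VRAM_GBS   # ~0.04 s per token

print(f"PCIe-bound: {1 / pcie_s_per_token:.2f} tok/s")  # under 1 tok/s
print(f"VRAM-bound: {1 / vram_s_per_token:.1f} tok/s")
```

Under these assumptions the PCIe-bound path is roughly 30x slower than keeping everything in VRAM, which is why partial offload only helps when most layers still fit on the GPU.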

    • @AdrianTheFrog@lemmy.world · 4 months ago

      PCIe will probably be the bottleneck way before the number of GPUs is, if you’re planning on storing the model in RAM. Probably better to get a high-end server CPU.