• I Cast Fist@programming.dev · +2 · 2 days ago

    Does anyone know if it can run CUDA code? Because that’s the silver bullet ensuring Nvidia dominance in the planet-wrecking servers

    • peppers_ghost@lemmy.ml · +4 · 2 days ago

      llama.cpp and PyTorch support it right now. CUDA isn’t available on its own as far as I can tell. I’d like to try one out, but the bandwidth seems to be ass. About 25% as fast as a 3090. It’s a really good start for them though.

  • uberstar@lemmy.ml · +7 · 3 days ago

    I kinda want a consumer-friendly, low-end/mid-range alternative that can run my games and my video editing software for very small projects… so far I’m only eyeing the Lisuan G100, which seems to fit that bill…

    This seems cool though; beyond AI, it could be used for distributed cloud computing or something of that sort.

  • slacktoid@lemmy.ml · +16/-1 · edited · 4 days ago

    Where can I buy this?

    Edit: I realized after I commented that this was the product page… My bad. It was more of a “take my money now” scenario.

  • geneva_convenience@lemmy.ml · +9/-3 · 4 days ago

    For inference only. NVIDIA GPUs are so dominant because they can train models, not just run them. All other GPUs seem to lack that capacity.

    • lorty@lemmygrad.ml · +6 · 3 days ago

      And training them requires a LOT of VRAM, which is why they do as much as they can to limit VRAM on their gaming cards: better market segmentation.

        • Aria@lemmygrad.ml · +1 · 2 days ago

          CUDA is not equivalent to AI training. Nvidia offers useful developer tools for using their hardware, but you don’t have to use them. You can train on any GPU or even a CPU. The projects you’ve looked at (?) just chose to use CUDA because it was the best fit for the hardware they had on hand, and they were able to tolerate the vendor lock-in.

            • Aria@lemmygrad.ml · +1 · 2 days ago

              I’m not saying you can deploy these in place of Nvidia cards where the tooling is built with Nvidia in mind. I’m saying that if you’re writing code you can do machine learning projects without CUDA, including training.
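
              As a rough illustration (a minimal sketch, assuming nothing beyond a stock PyTorch install; the tiny model and synthetic data here are made up), a training loop can be written against whatever device happens to be available and simply falls back to the CPU:

                  import torch
                  from torch import nn

                  # Use whatever accelerator backend is present; a plain CPU works too, just slower.
                  device = "cuda" if torch.cuda.is_available() else "cpu"

                  model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
                  opt = torch.optim.SGD(model.parameters(), lr=1e-2)
                  loss_fn = nn.MSELoss()

                  # Synthetic data, only to show that the training step itself is device-agnostic.
                  x = torch.randn(256, 16, device=device)
                  y = torch.randn(256, 1, device=device)

                  for step in range(100):
                      opt.zero_grad()
                      loss = loss_fn(model(x), y)
                      loss.backward()  # backprop runs on whichever device the tensors live on
                      opt.step()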

              • geneva_convenience@lemmy.ml · +1 · 2 days ago

                For sure you can work around it. But it’s not optimal and requires additional work most people don’t feel like putting in.

          • geneva_convenience@lemmy.ml · +1/-1 · edited · 3 days ago

            AMD has been lying about that every year since 2019.

            Last time I checked it didn’t. And it probably still doesn’t.

            People wouldn’t be buying NVIDIA if AMD worked just as well. The VRAM prices NVIDIA asks are outrageous.

              • geneva_convenience@lemmy.ml · +1 · edited · 2 days ago

                Can you train on it too? I tried PyTorch on AMD once and it was awful. They promised mountains but delivered nothing. Newer activation functions were all broken.

                llama.cpp is inference only, for which AMD works great too after converting to ONNX. But training was awful on AMD in the past.
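
                For what it’s worth, on a ROCm build of PyTorch the AMD card is exposed through the regular “cuda” device API, so a quick sanity check (just a sketch, assuming a ROCm build of PyTorch is installed) looks something like:

                    import torch

                    # On a ROCm build, torch.version.hip is set (it is None on CUDA builds)
                    # and AMD GPUs show up through the usual "cuda" device API.
                    print("HIP/ROCm build:", torch.version.hip)
                    print("GPU visible:", torch.cuda.is_available())
                    if torch.cuda.is_available():
                        print("Device:", torch.cuda.get_device_name(0))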

    • Aria@lemmygrad.ml · +3 · 2 days ago

      I agree with your conclusion, but these are LPDDR4X, not DDR4 SDRAM. It’s significantly faster. The lack of fans should also be seen as a positive, since it means they’re confident the cards aren’t going to melt. It would cost them very little to add visible active cooling to a 1000+ euro product.

        • Aria@lemmygrad.ml · +3 · 2 days ago

          That’s still faster than your expensive RGB XMP gamer-RAM DDR5 CPU-only system, and, depending on what you’re running, you can saturate the buses independently, doubling the speed and roughly matching a 5060. I disagree that you can categorise the speed as negating the capacity, since they’re different axes. You can run bigger models on this. Smaller models will run faster on a cheaper Nvidia. You aren’t getting 5080 performance and 6x the RAM for the same price, but I don’t think that’s a realistic ask either.
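
          To put rough numbers on that (back-of-the-envelope only; the transfer rates and bus widths below are illustrative assumptions, not figures from the product page):

              # Peak theoretical bandwidth = transfer rate (MT/s) * bus width in bytes
              def bandwidth_gb_s(mt_per_s: float, bus_bits: int) -> float:
                  return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

              # Typical dual-channel DDR5-6000 desktop: two 64-bit channels
              print(bandwidth_gb_s(6000, 128))  # ~96 GB/s

              # LPDDR4X-4266 on an assumed wide 384-bit bus
              print(bandwidth_gb_s(4266, 384))  # ~205 GB/s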

      • Aria@lemmygrad.ml · +1 · edited · 2 days ago

        You can run llama.cpp on CPU. LLM inference doesn’t need any features that only GPUs typically have, which is why it’s possible to build even simpler NPUs that can still run the same models. GPUs just tend to be faster. If the GPU in question is not faster than an equally priced CPU, you should use the CPU (better OS support).
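
        For example (a sketch using the llama-cpp-python bindings; the model path is just a placeholder):

            from llama_cpp import Llama

            # n_gpu_layers=0 keeps every layer on the CPU; no GPU or NPU required.
            llm = Llama(model_path="./some-model.Q4_K_M.gguf", n_gpu_layers=0, n_ctx=2048)

            out = llm("Explain what an NPU is in one sentence.", max_tokens=64)
            print(out["choices"][0]["text"])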

        Edit: I looked at a bunch of real-world prices and benchmarks, read the manual from Huawei, and my new conclusion is that this is the best product on the market if you want to run, at modest speed, a model that doesn’t fit in 32GB but does fit in 96GB. Running multiple cards in parallel seems to range from unsupported to working poorly, so you should only expect to use one.

        Original rest of the comment, written under the assumption that this was slower than it actually is, but had better drivers:
        The only benefit of this product over a CPU is that you can slot in multiple of them and they parallelise without needing to coordinate anything with the OS. The cost also scales roughly linearly as long as you have the PCIe lanes for it. A home user with enough money for one or two of these would be much better served spending it on a fast CPU and 256GB of system RAM.

        If not AI, then what use case do you think this serves better?

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · +1 · 2 days ago

          The point is that the GPU is designed for parallel computation. That happens to be useful for graphics, AI, and any other problem that can be expressed as a lot of independent calculations executed in parallel. It’s a completely different architecture from a traditional CPU. This particular card is meant for running LLMs, and it will do so orders of magnitude faster than a CPU.

      • interdimensionalmeme@lemmy.ml · +5/-5 · 4 days ago

        For $2,000 it “claims” to do 140 TOPS of INT8, when an Intel Core Ultra 7 265K does 33 TOPS of INT8 for $284.

        Don’t get me wrong, I would LOVE to buy a Chinese GPU at a reasonable price, but this isn’t even price-competitive with CPUs, let alone GPUs.
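
        Spelling the comparison out with the figures quoted above (simple arithmetic, nothing more):

            # INT8 TOPS per dollar, using the quoted numbers
            card = 140 / 2000  # ~0.07 TOPS per dollar
            cpu = 33 / 284     # ~0.12 TOPS per dollar
            print(f"card: {card:.3f} TOPS/$, CPU: {cpu:.3f} TOPS/$")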