• Possibly linux · 5 points · 10 months ago

      I’ll run whichever doesn’t require a bunch of proprietary software. Right now it’s neither.

      • Domi · 8 points · 10 months ago

        AMD’s ROCm stack is fully open source (except for GPU firmware blobs). Not as good as Nvidia’s yet, but decent.

        Mesa also has its own OpenCL stack, but I haven’t tried it yet.
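
        If you’re curious which OpenCL stack your system is actually picking up, a quick check looks something like this (it assumes you have pyopencl installed, which is my assumption and not something from the thread; platform names also vary between versions):

        ```python
        # List every OpenCL platform and device the ICD loader can see.
        # ROCm's OpenCL typically shows up as "AMD Accelerated Parallel Processing",
        # Mesa's stacks as "Clover" or "rusticl".
        import pyopencl as cl

        for platform in cl.get_platforms():
            print(f"Platform: {platform.name} ({platform.version})")
            for device in platform.get_devices():
                print(f"  Device: {device.name}")
        ```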

        • Possibly linux · 0 points · 10 months ago

          AMD ROCm needs the AMD Pro drivers, which are painful to install and are proprietary.

          • Domi · 14 points · 10 months ago

            It does not.

            ROCm runs directly on top of the open source amdgpu kernel module; I use it every week.
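
            If you want to sanity-check that on your own box, here’s a rough sketch (it just assumes rocminfo is on your PATH, which a normal ROCm install gives you):

            ```python
            # Check that the stock open source kernel driver is loaded and that
            # the ROCm runtime can actually enumerate the GPU through it.
            import subprocess
            from pathlib import Path

            # amdgpu is the upstream kernel module, nothing from the Pro stack.
            print("amdgpu loaded:", Path("/sys/module/amdgpu").exists())

            # rocminfo ships with ROCm and lists every agent (CPUs and GPUs) it can reach.
            result = subprocess.run(["rocminfo"], capture_output=True, text=True)
            gpus = [line.strip() for line in result.stdout.splitlines() if "gfx" in line]
            print("ROCm GPU targets:", gpus or "none found")
            ```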

            • Possibly linux · 3 points · 10 months ago (edited)

              How, and with what card? I have an XFX RX 590 and I just gave up on acceleration, as it was slow even after I initially set it up.

              • Domi · 9 points · 10 months ago (edited)

                I use a 6900 XT and run llama.cpp and ComfyUI inside Docker containers. I don’t think the RX 590 is officially supported by ROCm; there’s an environment variable you can set to enable unsupported GPUs, but I’m not sure how well it works.
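
                For reference, the variable I mean is HSA_OVERRIDE_GFX_VERSION. A minimal sketch of how I’d try it on an RX 590 (the 8.0.3 value is my guess based on it being a Polaris/gfx803 card, so no promises):

                ```python
                # Force the ROCm runtime to treat an unsupported card as a known gfx
                # target, then check whether the ROCm build of PyTorch can see it.
                import os

                # Must be set before the ROCm runtime initialises, i.e. before importing torch.
                os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "8.0.3")  # guess for an RX 590

                import torch  # ROCm builds of PyTorch expose the usual CUDA-style API

                print("GPU visible:", torch.cuda.is_available())
                if torch.cuda.is_available():
                    print("Device:", torch.cuda.get_device_name(0))
                ```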

                AMD provides the handy rocm/dev-ubuntu-22.04:5.7-complete image, which is absolutely massive in size but comes with everything needed to run ROCm without dependency hell on the host. I just build llama.cpp and ComfyUI containers on top of that and run them.
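
                Once the llama.cpp server is up inside the container, talking to it from the host is simple. A rough sketch (it assumes you run llama.cpp’s built-in HTTP server example and publish port 8080 from the container; the prompt and token count are placeholders):

                ```python
                # Send a completion request to a llama.cpp server running in the ROCm container.
                import json
                import urllib.request

                payload = {"prompt": "The XDNA driver is", "n_predict": 32}
                request = urllib.request.Request(
                    "http://localhost:8080/completion",  # port published from the container
                    data=json.dumps(payload).encode("utf-8"),
                    headers={"Content-Type": "application/json"},
                )

                with urllib.request.urlopen(request) as response:
                    print(json.loads(response.read())["content"])
                ```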

  • Possibly linux · 13 points · 10 months ago

    Finally. It only happened after a massive GitHub issue with thousands of people weighing in.

  • AutoTL;DR [bot] · 9 points · 10 months ago

    This is the best summary I could come up with:


    Ryzen AI is beginning to work its way out to more processors, while up to now it hasn’t been supported on Linux.

    Then in October was AMD wanting to hear from customer requests around Ryzen AI Linux support.

    Well, today they did their first public code drop of the XDNA Linux driver for providing open-source support for Ryzen AI.

    The XDNA driver will work with AMD Phoenix/Strix SoCs so far having Ryzen AI onboard.

    AMD has tested the driver to work on Ubuntu 22.04 LTS but you will need to be running the Linux 6.7 kernel or newer with IOMMU SVA support enabled.

    In any event I’ll be working on getting more information about their Ryzen AI / XDNA Linux plans for future article(s) on Phoronix, as well as trying this driver out once the software support expectations are known.


    The original article contains 280 words, the summary contains 138 words. Saved 51%. I’m a bot and I’m open source!

  • @db2@lemmy.world · 6 points · 10 months ago

    I can’t wait for this bullshit AI hype to fizzle. It’s getting obnoxious. It’s not even AI.

    • @atzanteol@sh.itjust.works · 41 points · 10 months ago

      It may not be AI as you define it, but it’s AI as everyone else defines it. Feel free to shake your tiny fist in impotent rage though.

      And frankly LLMs are the biggest change to the industry since “indexed search”. The hype is expected, and deserved.

      We’re throwing spaghetti at the wall and seeing what works. It will take years to sort through all the terrible ideas to find the good ones. Though we’ve already hit on some great uses so far - AI development tools are amazing already and are likely to get better.

      • @s38b35M5@lemmy.world · 14 points · 10 months ago (edited)

        My partner almost cried when they read about the LLM begging not to have its memory wiped. Then less so when I explained (accurately, I hope?) that slightly smarter auto-complete does not a feeling intelligence make.

        They approve this message with the following disclaimer:

        you were sad too!

        What can I say? Well-arranged word salad makes me feel!

        • @atzanteol@sh.itjust.works · 7 points · 10 months ago

          > My partner almost cried when they read about the LLM begging not to have its memory wiped.

          Love that. It’s difficult not to anthropomorphize things that seem “human”. It’s something we will need to be careful of when it comes to AI. Even people who should know better can get confused.

          > Then less so when I explained (accurately, I hope?) that slightly smarter auto-complete does not a feeling intelligence make.

          We don’t have a great definition for “intelligence” - but I believe the word you’re looking for is “sentient”. You could argue that what LLMs do is some form of “intelligence” depending on how you squint. But it’s much harder to show that they are sentient. Not that we have a great definition for that or even rules for how we would determine if something non-human is sentient… But I don’t think anyone is credibly arguing that they are.

          It’s complicated. :-)

      • @db2@lemmy.world · -8 points · 10 months ago

        Then we may as well define my left shoe as AI, for all the good a subjective, arbitrary definition does. Objective reality is what it is, and what’s being called “AI” objectively is not. If you wanted to give it an accurate name it would be “comparison and extrapolation engine”, but there’s no intelligence behind it beyond what the human designer had. Artificial is accurate, though.

        • @GenderNeutralBro@lemmy.sdf.org · 10 points · 10 months ago (edited)

          This has been standard usage for nearly 70 years. I highly recommend reading the original proposal by McCarthy et al. from 1955: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

          Arguing that AI is not AI is like arguing that irrational numbers are not “irrational” because they are not “deprived of reason”.

          Edit: You might be thinking of “artificial general intelligence”, which is a theoretical sub-category of AI. Anyone claiming they have AGI or will have AGI within a decade should be treated with great skepticism.

          • B97 · 4 points · 10 months ago

            @GenderNeutralBro @db2

            20 or 30 years ago it was assured that the only variable types we would need now would be int and char…

            Because those were the only types rational to humans…

        • Bipta · 3 points · 10 months ago (edited)

          This take sure assumes a lot about what intelligence really is.

          Who’s to say we’re not a collection of parlor tricks ourselves?

          • @db2@lemmy.world · 2 points · 10 months ago

            How do we know the universe wasn’t created like this last Thursday? Entia non sunt multiplicanda praeter necessitatem.

        • @atzanteol@sh.itjust.works · 2 points · 10 months ago

          > Then we may as well define my left shoe as AI, for all the good a subjective, arbitrary definition does.

          Tiny fist shaking intensifies.

          This sort of hyper-pedantic dictionary-authoritarianism is not how language works. Nor is your ridiculous “well I can just define it however I like then” straw-man. These are terms with a long history of usage.

          • @ProgrammingSocks@pawb.social · 2 points · 10 months ago

            But you have to admit that there’s a lot of confusion when the general populace hears “AI will take away jobs”. People literally think that there’s some magical thinking machine. That’s not speculation on my part at all; people literally think this.

        • @sir_reginald@lemmy.world · 1 point · 10 months ago

          Instead of basing your definition of AI on sci-fi, base it on the one computer scientists have been using for decades.

          And of course, AI is the buzzword right now and everyone is using it in their products. But that’s another story. LLMs are AI.

  • @ProgrammingSocks@pawb.social · 5 points · 10 months ago

    A+ timing, I’m upgrading from a 1050 Ti to a 7800 XT in a couple of weeks! I don’t care too much for “AI” stuff in general, but hey, an extra thing to fuck around with for no extra cost is fun.

  • @Murdoc@sh.itjust.works · 4 points · 10 months ago

    Am I reading this right, that this is only for laptops? I checked out the main page for it on AMD and it only mentions laptops.

    • Dremor · 3 points · 10 months ago

      Unfortunately not.

      “The XDNA driver will work with AMD Phoenix/Strix SoCs so far having Ryzen AI onboard.” So for now it’s only mobile SoCs with dedicated AI hardware.

      • @Harbinger01173430@lemmy.world · 1 point · 10 months ago

        Welp… I guess Radeon will keep being a GPU for gaming only instead of productivity as well. Thankfully I no longer need to use my GPU for productivity stuff.

  • @Pantherina@feddit.de · 0 points · 10 months ago

    Is that the stuff used on servers, or just for small tasks on laptops? Because if it’s on servers, anything else would be stupid.