Whenever AI is mentioned, lots of people in the Linux space immediately react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that “some people think AI is problematic” or something along those lines whenever an AI topic is discussed. I get that AI has many problems, but at the same time its potential is immense, especially as an assistant on personal computers (just look at what “Apple Intelligence” seems to be capable of). GNOME and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete (a rough sketch of what that could look like is below). Using an AI-less desktop may be akin to hand-copying books after the printing press revolution. If you think of specific problems, it is better to point them out and try to think of solutions, not reject the technology as a whole.

TLDR: A lot of Luddite sentiment around AI in the Linux community.
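To make “integrating FOSS AI models” a bit more concrete, here is a minimal sketch of a desktop helper querying a locally running open model through Ollama’s HTTP API. The localhost endpoint, the “llama3” model name, and the `ask_local_model` helper are illustrative assumptions, not part of any existing desktop.

```python
# Minimal sketch: a hypothetical desktop helper asking a locally hosted FOSS model
# a question via Ollama's HTTP API. Assumes Ollama is running on localhost:11434
# and a model such as "llama3" has already been pulled -- both are assumptions.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object; "response" holds the text.
        return json.load(resp)["response"]

if __name__ == "__main__":
    # A hypothetical desktop use case: suggesting a folder name from a plain-language request.
    print(ask_local_model("Suggest a short folder name for vacation photos from 2023."))
```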

  • @AnarchoSnowPlow@midwest.social
    29 · 5 months ago

    It’s a surprisingly good comparison, especially when you look at the reactions: frame breaking vs. data poisoning.

    The problem isn’t progress; the problem is that some of us disagree with the idea that what’s being touted is actual progress. The things LLMs are actually good at they’ve been doing for years (language translation); the rest of it is so inexact it can’t be trusted.

    I can’t trust any LLM-generated code because it lies about what it’s doing, so I need to verify everything it generates anyway, in which case it’s easier to write it myself. I keep trying it, and it looks impressive until it ends up as a far worse version of something I could have already written.

    I assume it’s the same way with everything I’m not an expert in, in which case it’s worse than useless to me: I can’t trust anything it says.

    The only thing I can use it for is to tell me things I already know, and that basically makes it a toy or a game.

    That’s not even getting into the security implications of giving shitty software access to all your sensitive data, etc.

    • @aksdb@lemmy.world
      6 · 5 months ago

      If you are so keen on correctness, please don’t say “LLMs are lying”. Lying is a conscious act of deception, and LLMs are not capable of it. That’s exactly the problem: they don’t think, they just assemble with probability. If they could lie, they could also produce real answers.
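      A toy sketch of what “assemble with probability” means in practice: each next token is sampled from a probability distribution, with no notion of truth attached to any choice. The vocabulary and weights below are made up purely for illustration.

      ```python
      # Toy illustration of probabilistic token assembly: sample the next token from
      # a distribution of plausible continuations. Nothing here models truth or intent;
      # the vocabulary and probabilities are invented for the example.
      import random

      next_token_probs = {
          "the": 0.40,     # most likely continuation
          "a": 0.30,
          "banana": 0.20,
          "false": 0.10,   # "false" is just another token, not a judgement
      }

      def sample_next_token(probs: dict[str, float]) -> str:
          tokens, weights = zip(*probs.items())
          return random.choices(tokens, weights=weights, k=1)[0]

      print(sample_next_token(next_token_probs))  # plausible-sounding, not "honest" or "lying"
      ```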