This is a classic case of tragedy of the commons, where a common resource is harmed by the profit interests of individuals. The traditional example of this is a public field that cattle can graze upon. Without any limits, individual cattle owners have an incentive to overgraze the land, destroying its value to everybody.

We have commons on the internet, too. Despite all of its toxic corners, it is still full of vibrant portions that serve the public good — places like Wikipedia and Reddit forums, where volunteers often share knowledge in good faith and work hard to keep bad actors at bay.

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

  • @hypna@lemmy.world
    19 points · 1 year ago

    A truly poor analogy. LLMs don’t remove anything from anywhere. They consume no shared resource.

    It’s been wild watching people flail about searching for arguments for why LLMs should be stopped. I’m not even saying they shouldn’t, just that I haven’t seen a solid argument for it.

    • Spzi
      13 points · 1 year ago

      As per the article, it goes like this:

      1. AI is trained on publicly available data
      2. AI does not credit or compensate original authors
      3. People don’t like their work being used without credit or compensation
      4. People share less publicly
      5. Public spaces become deserted

      And simultaneously, low-quality AI content drowns out what remains.

      As for other arguments, have you heard about the control/alignment problem, or x-risk?

      • @Bye@lemmy.world
        8 points · 1 year ago

        Isn’t that true of people too? If I read a bunch of books and then use what I learned to write a new book, I’m not crediting the original authors. If I learn painting techniques from Van Gogh and El Greco, I’m not crediting them either.