AI companies have all kinds of arguments against paying for copyrighted content

The companies building generative AI tools like ChatGPT say updated copyright laws could interfere with their ability to train capable AI models. Here are comments from OpenAI, StabilityAI, Meta, Google, Microsoft and more.

  • @echo64@lemmy.world · 1 · 1 year ago

    Legally, AI can’t create its own copyrightable content. Indeed, it cannot learn. It can only produce models that we tune on datasets, and those datasets are copyrighted content. I’m a little tired of the anthropomorphizing of AIs. They are statistical models, not children.

    No sir, I didn’t copy this book. I trained ten thousand ants to eat cereal, but only after running through an inkwell and then a maze that I got them to move through in a way that deposits the ink exactly where I need it in order to copy this book.
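
    To make “produce models that we tune on datasets” concrete, here is a toy sketch of what that tuning amounts to: a handful of made-up (input, target) pairs and a two-parameter model whose numbers get nudged to reduce error. Everything below (the data, the names, the learning rate) is illustrative, not taken from any real system.

    ```python
    # Toy illustration of "producing a model by tuning it on a dataset":
    # the model is just two numbers, nudged to reduce error on the data.
    # The dataset and every value here are made up for illustration.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]   # (input, target) pairs

    w, b = 0.0, 0.0        # model parameters, start arbitrary
    learning_rate = 0.01

    for _ in range(5000):
        for x, target in data:
            prediction = w * x + b        # what the model currently outputs
            error = prediction - target   # how wrong it is on this example
            w -= learning_rate * error * x   # nudge parameters toward less error
            b -= learning_rate * error

    print(f"tuned parameters: w={w:.2f}, b={b:.2f}")
    print(f"prediction for input 5.0: {w * 5.0 + b:.2f}")
    ```

    Whether that parameter-nudging deserves the word “learning” is exactly what the rest of this thread argues about.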

    • @abhibeckert@lemmy.world · 1 · edited · 1 year ago

      The AI isn’t being accused of copyright infringement. Nothing is being anthropomorphized.

      Whether you write a copy of a book with a pen, type it into a keyboard, photograph every page, or scan it with a machine learning model is completely irrelevant. The question is: did you (the human using the pen/keyboard/camera/AI model) break the law?

      I’d argue no, but other people disagree. It’ll be interesting to see where the courts land on it, and perhaps more importantly, whether new legislation is written to change copyright law.

    • @Mnemnosyne@sh.itjust.works · -2 · 1 year ago

      It can only produce models that we tune on datasets, and those datasets are copyrighted content.

      That’s called learning. You learn by taking in information, then you use that information to produce something new.

      • @echo64@lemmy.world · 7 · 1 year ago

        It isn’t. Statistical models do not learn. That’s just how we anthropomorphize them. They get biased.

          • @echo64@lemmy.world · 5 · edited · 1 year ago

            No, you literally cannot. Maybe if you were a techbro who doesn’t really understand how the underlying systems work, but has seen sci-fi and wants to use that to describe the current state of technology.

            But you’re still wrong if you try.

            • @Bgugi@lemmy.world · 0 · 1 year ago

              Yes, you literally can. At the very deepest level, neural networks work in essentially the same way actual neurons do. All “learning,” artificial or not, is adjusting the interconnections and firing rates between nodes, biasing them toward desired outputs.

              Humans are a lot more complicated in terms of size and architecture. Our processing has many more layers of abstraction (understanding, emotion, and who knows what else). But fundamentally the same process is occurring: inputs + rewards = biases. Inputs + biases = outputs.
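
              A minimal sketch of that “inputs + biases = outputs, inputs + rewards = biases” framing, for a single artificial neuron. The numbers, names, and the simplified delta-rule-style update below are purely illustrative (real networks use many layers and backpropagation):

              ```python
              import math

              # One artificial neuron: weighted inputs -> an activation ("firing rate").
              # Everything here is made up to illustrate the framing in the comment above.
              weights = [0.2, -0.4, 0.1]   # connection strengths, start arbitrary
              bias = 0.0

              def fire(inputs):
                  # inputs + current weights/bias = output
                  total = sum(w * x for w, x in zip(weights, inputs)) + bias
                  return 1 / (1 + math.exp(-total))   # squash into a 0..1 "firing rate"

              def adjust(inputs, desired, rate=0.5):
                  # inputs + reward signal (error vs. desired output) = new biases
                  global bias
                  error = desired - fire(inputs)
                  for i, x in enumerate(inputs):
                      weights[i] += rate * error * x   # strengthen/weaken each connection
                  bias += rate * error

              example_inputs, desired_output = [1.0, 0.5, -1.0], 1.0   # made-up training example
              for _ in range(200):
                  adjust(example_inputs, desired_output)

              print(fire(example_inputs))   # now close to the desired 1.0
              ```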

              • @echo64@lemmy.world · 2 · 1 year ago

                At the very deepest level, neural networks work in essentially the same way actual neurons do.

                They do not. Neural networks were inspired by neurons, but it’s a wild oversimplification of both neural networks and neurons to say they work the same way. This is the kind of thing sci-fi-watching tech bros will say, but it’s incorrect.