Apparently, stealing other people’s work to build a product for money is now “fair use,” according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • @intensely_human@lemm.ee
    1
    11 months ago

    Not sure what you mean by passive. It takes a hell of a lot of electricity to train one of these LLMs so something is happening actively.

    I often interact with ChatGPT 4 as if it were a child. I guide it through different kinds of mental problems, having it take notes and evaluate its own output, because I know our conversations become part of its training data.

    It feels very much like teaching a kid to me.

    • luciole (he/him)
      9
      11 months ago

      I mean passive in terms of will. Computers want nothing and do nothing of their own accord; they’re machines that function according to commands.

      The sense that you’re teaching a child when you feed natural-language input to an LLM until you’re satisfied with the output is known as the ELIZA effect. To quote Wikipedia:

      In computer science, the ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface. The effect is a category mistake that arises when the program’s symbolic computations are described through terms such as “think”, “know” or “understand.”
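      For context, the original ELIZA did nothing but mechanical pattern substitution against a script of canned templates, yet users still attributed understanding to it. A toy sketch in that spirit (my own illustration in Python, not ELIZA’s actual script):

      ```python
      import re

      # A toy ELIZA-style responder: purely mechanical pattern matching and
      # template filling, yet its output can read as "understanding".
      RULES = [
          (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
          (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
          (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
      ]

      def respond(text: str) -> str:
          # Try each rule in order; fall back to a generic prompt.
          for pattern, template in RULES:
              match = pattern.search(text)
              if match:
                  return template.format(match.group(1).rstrip(".!?"))
          return "Please go on."

      print(respond("I feel misunderstood"))  # prints: Why do you feel misunderstood?
      ```

      There is no semantic comprehension anywhere in that loop; describing it as “knowing” why you feel misunderstood is exactly the category mistake the quote above describes.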