So it’s content laundering
What a colorful mischaracterization. It sounds clever at face value but it’s really naive. If anything about this is deceptive, it’s the lengths that people go to to slander what they dislike.
I feel most people critical of AI don’t know how a neural network works…
That is exactly what’s going on here. Or they hate it enough that they don’t mind making stuff up or mischaracterizing what it does. Seems to be a common thread on the Fediverse. It’s not the first time this week I’ve seen it.
Actually, content laundering is the best term I’ve heard for the process. Just like money laundering, you can no longer trace the source, and the result becomes technically legal to use and distribute.
I mean, if the copyrighted content weren’t so critical, they would train models without it. They’re essentially derivative works, but no one wants to acknowledge that, because acknowledging it would either require changing our copyright laws or make this potentially lucrative and important work illegal.
Content laundering is not a good way to describe it; the term is misleading because it oversimplifies and mischaracterizes what a language model actually does. It reflects a fundamental misunderstanding of how these models work. Training language models is typically a transparent and well-documented process, as described by mountains of research over the past decades. The real value lies in the weights of the neural network, not in any verbatim copy of the training data; the model does not retain its sources in their entirety. The source material is evaluated and wholly transformed into new data in the form of nodes and weights. The original content does not exist as-is within the network, because there is no way to encode it that way. It’s a statistical system that compounds information.
And while LLMs do have the capacity to produce derivative works in some circumstances, that is not all they do, or what they always do; it’s only one of many functions. What you say might hold if a model were trained on a single source, but that isn’t even feasible. When you train it on millions of sources, what remains are the overall patterns of language across those works. It’s much more sophisticated and flexible than what you describe.
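To make that point concrete, here is a deliberately tiny toy model in Python, a bigram counter that is nowhere near a real transformer, but shows the one idea at issue: after training on several texts, what the model retains are aggregate transition statistics ("weights"), not the texts themselves. The function name, corpus sentences, and overall setup are all invented for illustration.

```python
# Toy illustration only: a bigram counter, not an LLM. After training,
# the model holds aggregate statistics, not the source sentences.
from collections import Counter, defaultdict

def train(corpus_texts):
    """Count word-to-next-word transitions across all input texts."""
    counts = defaultdict(Counter)
    for text in corpus_texts:
        words = text.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    # Normalize the raw counts into transition probabilities.
    return {
        word: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for word, nxts in counts.items()
    }

weights = train([
    "the cat sat on the mat",
    "the dog sat on the rug",
])

# The statistics blend both sources; neither sentence is stored verbatim.
print(weights["sat"])  # every "sat" was followed by "on"
print(weights["the"])  # "the" leads to cat/dog/mat/rug with equal probability
```

Note that `weights` contains only probabilities pooled over both sentences; there is no way to read either original sentence back out of it, which is the (vastly scaled-down) sense in which training transforms sources into weights rather than storing them.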
So no. If it were that cut and dried, there would already be grounds for a legitimate lawsuit. The problem is that people argue points that do not apply but sound reasonable to anyone who hasn’t seen a neural network work under the hood. If anything, new laws need to be created to address what LLMs actually do, if you’re so concerned about proper compensation.