A group of authors filed a lawsuit against Meta, alleging the unlawful use of copyrighted material in developing its Llama 1 and Llama 2 large language models....
So why are Meta and, say, Sci-Hub treated so differently? I don’t necessarily disagree, but it’s interesting that we legally attack people who share data altruistically (Sci-Hub gives research away for free so more research can be done; scientific research should be free to the world, because it benefits all of mankind), yet when companies break the same laws just to make more money, that’s somehow fine.
It’s like trying to improve the world is punished, and being a selfish greedy fucking pig is celebrated and rewarded.
Sci-Hub is so vilified that it can be blocked at the ISP level, and politicians are pushing for DNS-level blocking. Is anything like that happening to Meta? No? Huh, interesting. I wonder why Meta gets different treatment for similar behavior.
I am willing to defend Meta’s use of this kind of data after the world has changed how it treats entities like Sci-Hub. Until that changes, all you are advocating for is for corporations to be able to break the law while altruistic people get punished. I agree they’re the same, but until the law treats them the same, you’re just giving freebies to giant corporations while fucking yourself in the ass.
So why are Meta and, say, Sci-Hub treated so differently?
They are not. Meta is being sued, just like Sci-Hub was sued. So, one difference is that the suit involving Meta is still ongoing.
In any case, Meta did not create the dataset. IDK if they even shared it. The researcher who did create it is also being sued, and the dataset has been taken down in response to a copyright complaint; IDK if it is still available anywhere. So the dataset was treated just like Sci-Hub: the sharing of the copyrighted material was stopped.
Meta downloading these books for AI training seems like fairly straightforward fair use to me. I don’t see how what Meta did is anything like what Sci-Hub did.
So ISPs are blocking Meta for breaking copyright?
Because ISPs block Sci-Hub.
No. One of them has governments trying to kick it off the internet, while the other is allowed to keep doing what it’s doing, and the worst it will face is a fine. Not even close to the same; completely disproportionate. If ISPs were blocking all Meta LLMs until all copyrighted material had been removed, maybe we could say the same.
ISPs may block sites to prevent unauthorized copying; it’s not a punishment for past wrongdoing. I’m not sure about the details, and I think this differs a lot between jurisdictions, but basically, as ISPs they are involved in the unauthorized act of copying: their servers copy the data to the end user/customer. So they may be on the hook for infringement themselves if they don’t act.
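The mechanism described above can be sketched in miniature: a filtering resolver checks a blocklist before answering, so the ISP’s infrastructure never copies the blocked site’s data to the customer. This is a toy illustration, not how any real ISP resolver is implemented; all domain names and addresses are made-up documentation values.

```python
from typing import Optional

# Toy blocklist-based resolver. Real ISP blocking works at the DNS or IP
# level with court-ordered lists; the names below use the reserved
# example/documentation values, chosen purely for illustration.
BLOCKLIST = {"blocked.example"}

UPSTREAM = {  # stand-in for a real upstream DNS lookup
    "blocked.example": "203.0.113.5",
    "allowed.example": "198.51.100.7",
}

def resolve(domain: str) -> Optional[str]:
    """Return the address for a domain, or None if the resolver blocks it."""
    if domain in BLOCKLIST:
        return None  # NXDOMAIN-style refusal: no bytes ever flow to the customer
    return UPSTREAM.get(domain)

print(resolve("blocked.example"))   # None
print(resolve("allowed.example"))   # 198.51.100.7
```

The point of the sketch is only that the block is preventative: the refusal happens before any copying, which is why it isn’t framed as a punishment.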
Again, I am not aware of Meta sharing the copyrighted books in question. So, I don’t know what the legal basis for blocking Meta would be. If ISPs block a site without a legal basis, they are probably on the hook for breach of contract.
IDK on what basis the sharing of Meta’s LLMs could be stopped. If anyone could claim copyright, it would be Meta itself, and Meta allows sharing them. (I have doubts about whether AI models are copyrightable under current US law.)
https://www.nytimes.com/2024/01/08/technology/openai-new-york-times-lawsuit.html

In its lawsuit Wednesday, the Times accused Microsoft and OpenAI of creating a business model based on “mass copyright infringement,” stating that the companies’ AI systems were “used to create multiple reproductions of The Times’s intellectual property for the purpose of creating the GPT models that exploit and, in many cases, retain large portions of the copyrightable expression contained in those works.”
Publishers are concerned that, with the advent of generative AI chatbots, fewer people will click through to news sites, resulting in shrinking traffic and revenues.
The Times included numerous examples in the suit of instances where GPT-4 produced altered versions of material published by the newspaper.
In one example, the filing shows OpenAI’s software producing almost identical text to a Times article about predatory lending practices in New York City’s taxi industry.
But in OpenAI’s version, GPT-4 excludes a critical piece of context about the sum of money the city made selling taxi medallions and collecting taxes on private sales.
In its suit, the Times said Microsoft and OpenAI’s GPT models “directly compete with Times content.”
If the New York Times’ evidence holds up (I haven’t seen the evidence, so I can’t comment on its veracity), then you can recreate copyrighted works with LLMs, and as such they’re doing the same thing as The Pirate Bay: distributing copyrighted works without authorization and making money off the venture.
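For what it’s worth, claims like “almost identical text” can be quantified: compare the model’s output against the original and measure the longest verbatim overlap. A minimal sketch using Python’s standard difflib; the two strings below are invented placeholders, not actual NYT or GPT-4 text.

```python
from difflib import SequenceMatcher

def longest_verbatim_overlap(original: str, generated: str) -> str:
    """Return the longest contiguous substring shared by both texts."""
    m = SequenceMatcher(None, original, generated)
    match = m.find_longest_match(0, len(original), 0, len(generated))
    return original[match.a : match.a + match.size]

# Placeholder texts, invented for illustration only.
article = "The city made billions selling taxi medallions to drivers."
output = "As reported, the city made billions selling taxi medallions."
overlap = longest_verbatim_overlap(article, output)
print(len(overlap), repr(overlap))
```

A long shared span relative to the article’s length is the kind of evidence the suit reportedly relies on; short incidental overlaps prove little.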
So far, no ISPs are blocking Meta for this. I expect ISPs would get into a lot of legal trouble if they did.
The NYT sued OpenAI and Microsoft. a) That doesn’t involve Meta. b) It’s a claim by the NYT.
Why should ISPs deny their paying customers access to Meta sites or sites hosting LLMs released by Meta? These customers have contracts with their service providers. On what grounds, would ISPs be in the right to stop providing these internet services?
Both Meta and ChatGPT used Books3; it’s functionally the same type of case.
Why should ISPs deny their paying customers access to Meta sites or sites hosting LLMs released by Meta? These customers have contracts with their service providers. On what grounds, would ISPs be in the right to stop providing these internet services?
In the countries where ISP blocking happens, it’s usually because a copyright holder has sued, demanded blocking at the ISP level, and won in court. Then the government works with ISPs to block the site.
Unless you think most governments that do this do it arbitrarily? No, they do it because a copyright holder sued, as the New York Times has. The NYT has not demanded ISP-level blocking, but that doesn’t mean it couldn’t. I can’t speak to its choice not to do so, other than that companies seem to save that for truly altruistic groups and rarely do it to other big corporations.
IDK why you believe this. Breaking contracts is illegal: you get sued and have to pay damages. Some contracts, in some jurisdictions, may allow such arbitrary decisions; in other jurisdictions such clauses may be unenforceable.
altruistic groups
Well, that’s not something copyright law cares about very much. Unfortunately, this community seems very pro-copyright; very Ayn Rand, even. You’re not likely to get much agreement for any sensible reforms; quite the opposite. I don’t think arguing that Meta is doing the same as TPB is going to win anyone over. It’s more likely to get people here to call for more onerous and more harmful IP laws.
Both Meta and ChatGPT used Books3; it’s functionally the same type of case.
FWIW, no. The NYT case and this one differ in some crucial ways.

They pirated the books. Is that not legally relevant?

“Straightforward” may be too strong regarding these books. If they had inadvertently picked up unauthorized copies while scraping the web, that would definitely not be a problem; that’s what search engines do.
The question is whether it is a problem that the researchers knowingly downloaded these copyrighted texts. Owners don’t seem to go after downloaders, and IDK if there is case law establishing that the mere act of downloading copyrighted material is infringement. I don’t think there’s anything to suggest that knowing about the copyright status should make a difference in civil law.
In any case, researchers must be able to share copyrighted material, not just for AI training but for any other purpose that needs it. If this is not fair use, then Common Crawl may not be fair use either. IDK if there is case law on sharing copyrighted materials as research material, rather than for their content, but I find it hard to see how it could not be fair use, as the alternative would be extremely destructive. So even if the download would normally be infringement, I doubt that it is in this case.
Ultimately, we are only talking about a single copy of each book. So even if researchers were forced to purchase these books, all of AI training would yield only a few extra sales per title. The benefit to the owners would be very small relative to the damage to the public.