Shame Mist didn't actually read my link to that: Authors Guild, Inc. v. Google, Inc.
> Shame Mist didn't actually read my link to that.

No, but he did tell you very forcefully that there are NO VALID LEGAL ARGUMENTS.
> You're still ignoring that you tried to pass off the href as the image source.

You're 100% demonstrably wrong.
Our own forum threads do this all the time. Because of the storage limits on this website, much of the content in the picture and gif threads is linked from elsewhere: Imgur's backend, Discord's CDNs, or many other websites. That is why images on older pages frequently break.
Further, any website you visit with ads is generally pulling content from a dozen different domains that serve the ad content.
This isn't AOL. The internet got very fast over the past 15 years. Your browser can load content from dozens of different domains very quickly without you even noticing.
And you're still ignoring the point about the fact that if Paramount wanted all of their served content removed from Google, or removed from the caches of various CDNs, they could make it happen. There are mechanisms to allow this to happen.
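To make the "there are mechanisms" point concrete: robots.txt is one of them, and you can check what it permits with Python's standard library. The domain and paths below are made up for illustration; this is just a sketch of how a studio could tell crawlers to stay away from its media.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a content owner could serve to keep
# Google's crawler (and therefore its index and caches) away
# from a directory of stills.
robots_txt = """
User-agent: Googlebot
Disallow: /screencaps/

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

# Googlebot is barred from the stills; other agents are not.
print(rp.can_fetch("Googlebot", "https://example.com/screencaps/dune.jpg"))      # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/screencaps/dune.jpg"))   # True
```

Beyond robots.txt there are also formal takedown processes (DMCA notices, Google's own removal tools), but this is the simplest mechanical lever.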
> No but he did tell you very forcefully that there are NO VALID LEGAL ARGUMENTS.

I don't know shit about this anyway. The artists in my family never talk about how they learned to create their art or how they deal with copyright and shit.
You did see that, right? I don't know why we're even still talking about it. NO VALID LEGAL ARGUMENTS. lol.
Edit: NO VALID LEGAL ARGUMENTS now with more underlining, because underlining makes you correct.
Authors Guild, Inc. v. Google, Inc.
> Screen grabs are snippets. Fucking retard.

Yes, thanks for losing the argument.
View attachment 506223
Further, for all of these copyrighted books, Google displays the copyright page of the book. It attributes the content to the original content producer.
LLMs and Diffusion models do none of these things. They attempt to pass off the copied content, stored inside the model, as their own work. Again, Bing Chat improves on this a bit, to their credit.
> Did anybody ask any of the AI companies to remove their content and they refused?

Yes. Many people, repeatedly, including many ongoing lawsuits. It is not technically feasible.
> Yes. Many people, repeatedly, including many ongoing lawsuits. It is not technically feasible.

Remove it from the training data. Fucking retard.
They cannot remove the content from the model. Think of the model as a vast network of interconnected numbers. No one, not even the creators, understands how any given piece of content or any given idea is stored inside the model. The training run produced that math after putting it through an enormous amount of compute, and it is effectively a black box to even the smartest people on the planet.
But when you ask it to reproduce data that it trained on, it complies and produces a near-perfect copy, meaning the original is stored in there using what is effectively an indecipherable compression algorithm.
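You can see the same "statistics in, verbatim text out" effect in a toy model. This is a character-level Markov chain, nothing like a real transformer, but it shows how a model that stores only context-to-next-character statistics, never the text itself, can still regenerate its training data exactly:

```python
import random
from collections import defaultdict

# Toy illustration, NOT a real LLM: a character-level Markov model
# "trained" on a single sentence. The sentence is never stored as a
# string -- only as context -> next-character transition statistics --
# yet generation reproduces it verbatim.
text = "the quick brown fox jumps over the lazy dog"
order = 5  # long enough that every context in this sentence is unique

transitions = defaultdict(list)
for i in range(len(text) - order):
    transitions[text[i:i + order]].append(text[i + order])

out = text[:order]  # seed with the opening context
while len(out) < len(text):
    successors = transitions.get(out[-order:])
    if not successors:
        break
    out += random.choice(successors)

print(out)  # regenerates the full training sentence
```

The point of the toy: nowhere in `transitions` does the sentence exist as a retrievable file, yet the "model" reproduces it perfectly. Scale that idea up by billions of parameters and you get memorization that nobody can locate or excise.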
Screen grabs are snippets. Fucking retard.
The public display of the training data doesn't happen. Fucking retard.
AI generated screen grabs do not provide acceptable substitutes of the original movies. Fucking retard.
> Remove it from the training data. Fucking retard.

That's all anyone is asking, but it's not technically possible the way the technology is built.
> That's all anyone is asking, but it's not technically possible the way the technology is built.

Obviously they would have to stop using certain data in the training of new models and stop using models that use the training data if such usage was found to violate copyright. Just because someone files a lawsuit doesn't mean they have a valid claim.
If you remove it from the training data, that does not remove it from the model, aka the current version of the product. They would have to re-run the training and produce an entirely new model any time anyone issued a takedown request.
OpenAI's training runs reportedly take 6-9 months, use on the order of 20,000 datacenter-class GPUs costing roughly $10k each, and consume about as much electricity as thousands of homes use in a year. All of that compute distills the training data into a hyper-compressed, indecipherable array of numbers containing the original data and the mathematical relationships between every bit of data inside the model.
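Just the hardware line item from the figures above (rough public estimates, not official OpenAI numbers) gives a sense of why "just re-run it" is not a casual ask:

```python
# Back-of-the-envelope scale of a frontier training run, using the
# GPU count and unit price quoted above (estimates, not official).
gpus = 20_000
cost_per_gpu_usd = 10_000

hardware_cost_usd = gpus * cost_per_gpu_usd
print(f"GPU hardware alone: ${hardware_cost_usd:,}")  # $200,000,000
```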
You are demonstrating that you don't know what you're talking about. Again, there are no special math scissors that let you delete content from the model.
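The "no math scissors" point, sketched with a toy stand-in (n-gram counts in place of neural weights; the documents are made up):

```python
from collections import defaultdict

# Toy stand-in for a trained model: aggregate n-gram statistics over
# all training documents. Nothing in `model` tags which numbers came
# from which document, so there is no in-place delete operation --
# the only reliable "removal" is a full retrain without the data.
def train(docs, order=3):
    model = defaultdict(list)
    for doc in docs:
        for i in range(len(doc) - order):
            model[doc[i:i + order]].append(doc[i + order])
    return model

docs = [
    "the studio's screencaps and stills",   # pretend this is the disputed content
    "some completely unrelated public text",
]

model = train(docs)          # original training run
retrained = train(docs[1:])  # honoring a takedown = retraining from scratch

print("stu" in model, "stu" in retrained)  # True False
```

Even in this tiny case, "removing" the first document means rebuilding every statistic; for a real model that means repeating the entire multi-month training run.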
That's just your lack of reading comprehension; I already covered this in that same post:

> Obviously they would have to stop using certain data in the training of new models and stop using models that use the training data if such usage was found to violate copyright. Just because someone files a lawsuit doesn't mean they have a valid claim.
You're still not owning up to trying to pass off an href as the image source while claiming you're a tech expert and we don't "get it."

Got any explanation for that, or are we just trying to brush it under the rug?
> So the image you're presented in the index is in fact pulled from the original source. There's some complex caching at the CDN level to speed this up, which is what you see in the img src tag, but the content is pulled from the original source and links to the original source. Further, Google is not presenting this as their own work, unlike an LLM or Diffusion model.

The href shows where the content came from, and links to where you can go find it. It therefore serves as both a reference and an attribution. The img src is a reference to the cache on some CDN somewhere. Content Delivery Networks are absurdly complex and speed up a lot of what we do on the internet every day, but if a content owner wanted their material removed from the CDNs, their engineers could make it happen. The smartest AI developers in the world do not know how to snip content out of a model.
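The href/img-src distinction can be shown with a few lines of standard-library Python. The markup below is a simplified, made-up sketch of an image-search result tile, not Google's actual HTML:

```python
from html.parser import HTMLParser

# Simplified sketch of an image-search result tile: the <a href>
# points at the original page (reference + attribution), while the
# <img src> points at a cached copy on a CDN. URLs are invented.
snippet = (
    '<a href="https://example-movies.com/dune/stills/42">'
    '<img src="https://tbn0.example-cdn.com/images?q=abc123">'
    '</a>'
)

class RefExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.refs = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.refs["href (original source)"] = attrs.get("href")
        elif tag == "img":
            self.refs["img src (cached copy)"] = attrs.get("src")

p = RefExtractor()
p.feed(snippet)
for kind, url in p.refs.items():
    print(kind, "->", url)
```

Two different attributes, two different jobs: the src is a delivery optimization, the href is the pointer back to the owner.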
> Why, Cad, do you support one of the Wokest companies on Earth?

I don't. I support intelligence and reasoned arguments that make sense given the laws we have, which is why I usually oppose you.
There is no valid legal argument that compressing the original content itself and storing it inside the model, aka OpenAI's product, qualifies as either Fair Use or Transformative Use.
> So if I google Dune movie screencap and google does in fact show me movie screencaps, is google also guilty of copyright infringement for having those images in their search engine?

Yes. Just because they've used their giant mountain of money to fend off legal (and cultural) challenges to their business models doesn't mean that said mountain of money wasn't built off of other people's work. (As a bonus, this answer also bypasses the Misting.)