So it would be okay, as long as no full works were copied (or whatever would constitute a full work)? Like, if a human wouldn’t be punished for it, then the AI shouldn’t be either? That seems fair. But again, don’t we have to wait until it actually DOES that to punish it? (And for those that have already done so, like in that article you cited, then yes.) So, until GF’s Magic Canvas actually recreates a protected work, we should not be opposed to it. But when it does, we can treat it the same as any other forger.
This has always been the sticky wicket of legal clearances and copyright. At what point is a work transformative? There’s a longstanding myth that a lot of graphic professionals still subscribe to - that there’s a magic percentage (50%, 75%?) that makes something ‘ok’. Legal professionals will tell you there’s no such thing.
Look at it another way - Shutterstock, one of the largest major stock photo providers, has launched their own AI generator. The difference is that their source training set is their own library of images that they already hold the rights to. Therefore they can confidently say “The imagery that trains Shutterstock.AI is from our own contributors. This means it’s ethically sourced, totally clean, and extremely inclusive.” The artists who submitted their works to Shutterstock’s library agreed their work could be sold, remixed, reused, etc according to the specifics of their contract (there are differences between editorial and creative licenses, for example).
Stable Diffusion, Midjourney, and the others with pending litigation scraped the internet - their training set source artists never agreed to this usage.
That is cool, about Shutterstock. Respect. But, IRT agreeing to usage: do those same artists have the same claim if a human sources their image to “learn” how to be a good artist? (Obviously we aren’t talking about transformative or not here; I agree that whatever is the standard should be the standard.) But say it is just the “technique” the AI sources. Would those non-consenting artists hold a human learning to paint to the same standard?
I am not art-y. My brother is art-y, and he has looked at hundreds of paintings and tutorials online on how to do watercolor painting. Obviously the tutorials are fair game, but if he looks at a watercolor painting and tries to duplicate the feel of a brush stroke, isn’t that the same thing an AI does (again, assuming the AI isn’t crossing that transformative/unethical line)?
I apologize for hijacking the thread. I am just trying to gather input for making an opinion.
You’ve hit upon the problem:
They say they’re not but they obviously are in some cases - so what is really happening in that AI generator?
Consider this case: Hypothetical you posts pics of your toddler on FB. Random artist uses AI generator “cute little girl playing in a field, watercolor style” and gets an image that clearly has your toddler’s face even though the rest of the image is totally different than anything you posted. Random artist uses this image as the key art for a widespread campaign and you start seeing your toddler’s face everywhere. No one asked you, no one compensated you. The campaign advertises a product that you and your toddler would never support. How do you feel about that?
(regarding your brother: if he paints a thing and tries to pass it off as the original that is forgery of course. If he paints a thing with his own brush and paints then he painted a thing. If he scanned and digitally collaged parts of other people’s watercolors that is image manipulation, not creating new brush strokes. What we don’t really know is what the AI generator is doing)
And I think that is the heart of the problem. An AI stealing a face is exactly that, bad, but it is the same as a human artist stealing that face and shouldn’t be treated any differently. But if you tell it to paint the Las Vegas Strip in the style of Van Gogh, and it uses the same swirly strokes he did, then that is no different than if my brother did it. Both of them would have looked at Van Gogh’s works and imitated those types of strokes. But if I said paint a night sky with stars in the style of Van Gogh, and the AI or my brother painted Starry Night, then that is naughty. And both can be treated the same way. In either case, Van Gogh did not consent to either one sourcing his work, but it is okay unless the threshold (whatever it is) is met to claim forgery. And no one is telling my brother to stop looking at paintings and learning, but they ARE telling AI systems to stop doing that. So I still kind of think that, until the AI is actually “proved” to exceed that “threshold”, it should get the same benefit of the doubt.
It’s not that much of a mystery. how do diffusion models work - Google Search
I would feel exactly the same about that if a human produced the picture as I would if an algorithm did it. You’re talking about the output, not the input. If a random number generator produces an exact replica of a copyrighted Disney character, with no training or ingestion of Disney property, it’s still an infringement. Conversely, if you feed a bunch of Disney movies into a shredder, reduce them to their component molecules, and smear the resulting flat grey paste onto a canvas, are you infringing their rights?
Better example IMO would be that an AI generator is trained on hundreds of thousands of faces, including the one you posted on FB, in order to be able to distinguish between what looks like a face and what doesn’t. It then takes a picture of static and repeatedly tweaks the pixels until they get a higher and higher “looks like a face” score. The resulting picture does not match any of the images that it was trained on but is recognizable as a human face. How do you feel about that? Because that’s what happens the vast majority of the time. The article you linked even says “Their team tried out about 300,000 different captions, and only found a .03% memorization rate. Copied images were even rarer for models like Stable Diffusion that have worked to de-duplicate images in its training set”.
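The “tweak the pixels until the score goes up” loop described above can be sketched as toy hill climbing. This is an illustration only: the `score` function here is a hypothetical stand-in for a trained “looks like a face” model (it just rewards closeness to a fixed target pattern), and the 6-pixel “image” is nothing like real model inputs.

```python
import random

def score(image):
    # Hypothetical stand-in for a trained "looks like a face" scorer:
    # rewards pixels close to a fixed target pattern (higher is better).
    target = [0.2, 0.8, 0.8, 0.2, 0.5, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(image, target))

def generate(steps=5000, seed=0):
    rng = random.Random(seed)
    image = [rng.random() for _ in range(6)]  # start from pure "static"
    best = score(image)
    for _ in range(steps):
        i = rng.randrange(len(image))
        candidate = image[:]
        candidate[i] += rng.uniform(-0.05, 0.05)  # tweak one pixel
        s = score(candidate)
        if s > best:  # keep the tweak only if the score improves
            image, best = candidate, s
    return image, best

img, final_score = generate()
print([round(p, 2) for p in img], round(final_score, 4))
```

The result resembles the target pattern without being a copy of any single training example, which is the point the post is making; real diffusion models use learned denoising steps rather than random tweaks, but the input-noise-to-high-score direction is the same.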
I think this is the wrong focus. If the models are producing outputs that look too similar to their training data, that’s a problem to be solved. I don’t think it’s obvious from there that the ingestion of the training data is itself the problem. The analogy of a human artist studying existing artwork to learn has been made many times, and I think it’s an apt one. My opinion is that it’s what you do with it that matters.
Although that would be trademark infringement rather than copyright infringement.
Now THIS is a modern art exhibit I want to see before it becomes illegal! Where can I buy tickets?
This is quotable…I love it! Though tongue in cheek, it’s a brilliant example of the subject of all this furor.
Damnit, they steal my ideas and send them back in time.