AI in the App
Yup. All users were sent an invitation to this morning's official release at 11am PST.
Anyone who accepted the invite received the URL about 15 minutes ago:
I'm not including the actual link, as the event required an RSVP.
Well, that would explain why my GF UI played a sad trombone noise when I opened it this morning.
Ouch. "The company has applied for patents related to its Magic Canvas software, and the team was reluctant to share specific details about which exact latent diffusion neural network it's using to generate its made-for-laser-engraving AI artworks."
If it's based on potentially copyright-infringing datasets, that wouldn't surprise me in the least. And as much as I love AI image generation's capabilities, I don't like business practices that exploit and abuse my fellow artists and makers.
I hope Glowforge was smart enough to avoid stepping in that bucket of steaming, brown outrage, but their response does not leave me optimistic.
This was my first concern as well. Do you happen to know if anyone has asked them, and what their response was?
I was not at the reveal, but I was told multiple people asked them and were brushed off. They apparently said it's "new material", but skirted the question of whether the AI model was trained on copyrighted works. And they stated that it's been trained on "millions" of items, which leads me to believe they're either relying on existing LAION indexes, or scraping the internet with zero regard for copyright.
If so, that's really galling when you consider they're marketing this to a creative community, many of whom know firsthand what it's like to have their work ripped off without permission or compensation. Add to that the fact that there are artists in this community who sell files and images who are now in direct competition with this feature.
This was a shockingly tone deaf decision.
Re: "potentially copyright-infringing datasets," is litigation actually pending? I don't know anything about this, just curious...
I can understand the business case, so long as we're in the wild west of AI / intellectual property law, i.e. better to ask forgiveness than ask permission.
But... gosh, that presentation certainly was cheerfully perky and devoid of pesky nuance.
That was my thought too: people are going to want to support other artists rather than further support theft of art, so they're going to need transparency. Yikes, and yet not surprising at all.
Yes, there are several cases working their way through the courts.
Notably, Getty recently sued Stability AI.
And a group of artists sued Stable Diffusion and Midjourney's parent companies. I personally don't see that getting very far, as the plaintiffs have already made some sloppy mistakes in their arguments.
One recent development is going to make those cases tougher to defend, though. Researchers were able to reconstitute some of the original photos from Stable Diffusion training data. This could be a huge blow against the argument that AI image generators simply break down and remix content without using the original photos or infringing on copyrights.
Not really a surprise to anyone who's played around with these tools and had the AI spit out an image with a slightly mangled Getty Images or Shutterstock watermark present.
The watermark issue would be pretty damning there, I'd think.
So that is the thing from Bailey that I kept ignoring since it was during work.
Thanks!
Even with the watermarks it seems to me it is far from clear who may prevail.
Works need to be sufficiently transformative to be kosher. Where is the line drawn? The courts will figure it out and we're along for the ride.
Maybe not. The argument could be made that the AI was simply reconstructing an approximation of what it thought was part of the picture; it doesn't have the "sense" to know that the watermark isn't an integral part of the image. But the researchers who were able to extract whole images from the data... that may be damning. One of the key arguments for the defense is that the trained models don't store the training images, and that research seems to blow that argument out of the water. One expert described it as "lossy compression".
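To make the "lossy compression" framing concrete, here is a back-of-envelope calculation. The figures are rough, publicly reported ballparks used purely for illustration (a Stable Diffusion v1 checkpoint of a few gigabytes, trained on roughly 2.3 billion LAION image-text pairs), not exact numbers.

```python
# Rough back-of-envelope arithmetic behind the "lossy compression" framing.
# Both figures below are approximate, publicly reported ballparks (assumptions).
checkpoint_bytes = 4e9       # ~4 GB Stable Diffusion v1 checkpoint
training_images = 2.3e9      # ~2.3 billion LAION image-text pairs

bytes_per_image = checkpoint_bytes / training_images
print(f"~{bytes_per_image:.2f} bytes of model weights per training image")
# prints roughly "~1.74 bytes of model weights per training image"
```

At under two bytes of weights per training image, the model clearly cannot store its training set verbatim, which is why the "it doesn't keep the images" defense sounded plausible, and why researchers pulling recognizable training images back out of it is such a notable result.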
Spot on. Legally, everything is up in the air right now, and copyright law as it stands is not equipped to deal with disruptive technologies. Buckle up!
I'll tell you, I would much prefer a world where it was possible to use works as references very liberally. The alternative, where we're endlessly litigating how much influence is too much influence, would lock up even more works in corporate vaults.
Well, copyright law may not be equipped to deal with this, but I'm sure Congress, in its infinite wisdom, will soon enact clear and practical legislation that protects the interests of artists and creatives while also empowering companies and individuals to realize AI's fullest potential.
I know the legality is all murky and I know next to nothing about the actual technology. But my understanding of it, at the most basic level, is that the AI looks at tons of images. Then, when asked, it uses the "techniques" in those images (perhaps duplicating the techniques perfectly), but it doesn't use actual images, or even portions of images, in the finished product (roughly what the toy sketch below tries to show). How is that different than a living artist studying artwork and learning how to duplicate a style, or a brush stroke, or a photographic lighting technique?
I have only played around with the AI stuff a little bit, but I don't think I have ever gotten a finished product that was an exact duplicate (or even close enough to count) of another image. Yes, it has the same style, feel, and look, but so does the finished work of an "original living artist" creating their own "original" painting in an established style.
Just being devil's advocate here. I really don't have an opinion yet.
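For anyone curious what "learns techniques but doesn't keep the images" could mean mechanically, here is a deliberately tiny sketch. It is not a real latent diffusion model and assumes nothing about Glowforge's system; this made-up example just shows training reducing a pile of images to a small set of parameters, which is the intuition behind that claim (real models are expressive enough that some memorization can still leak through, per the research mentioned above).

```python
import numpy as np

# Toy illustration only: NOT a real diffusion model.
# "Training" here boils a pile of images down to per-pixel statistics;
# the images themselves are then discarded and only the parameters remain.
rng = np.random.default_rng(0)

# Stand-in "training set": 10,000 random 8x8 grayscale images.
training_images = rng.random((10_000, 8, 8))

# "Training": keep only summary parameters, not the pictures.
params = {
    "mean": training_images.mean(axis=0),  # one 8x8 array
    "std": training_images.std(axis=0),    # one 8x8 array
}
del training_images  # the originals are gone; only `params` is retained

# "Generation": sample a brand-new image from the learned statistics.
new_image = rng.normal(params["mean"], params["std"]).clip(0.0, 1.0)

print(new_image.shape)  # (8, 8)
print(f"retained {params['mean'].size * 2} numbers after seeing 640,000 pixels")
```

Whether a given model is closer to this "statistics only" picture or to the "lossy compression of the originals" picture described earlier is exactly what the lawsuits are arguing about.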
That's just it: the AI developers say they're just referencing style, but it's been proven that in many cases they are using actual images and portions of images. It is most damning when a prompt requests artwork in the style of a specific artist and/or with a specific artwork title vs. something broader like "impressionistic sunset with bear".
Then those cases could just be handled the same way as if a living artist copied something, right? Like, a living artist can copy artwork and we punish them for that, but we don't say stop using living artists? And if a living artist copies an art piece and is punished and then creates new art that isn't a copy, then that is okay, right? So is the only reason we don't like AI doing the same exact thing just that it can do it so much faster and more efficiently? Are we saying "No to all AI" because some images are copied? If so, why don't we say "No to any living artist" because someone forged something?
I am not trying to steal food from artists' mouths here, but I am trying to understand the difference.
I think you've leapt quite a bit too far; I'm not saying "no to all AI" at all.
I'm saying that what the AI developers have claimed is disingenuous and needs to be clarified.
They are all taking the tack right now that everything that comes out of their generators is NOT infringing on anyone else's work.
For a real-world example of handling rights for functional artwork: the US Library of Congress Digital Collections contains many works listed as "public domain" (with notes on how they entered the public domain), but also many that are specifically listed as not. For those, they clearly state that indemnification is up to the end user to obtain and that, where possible, they will point to the known rights holder.