AI in the App
Yup. All users were sent an invitation to this morning’s official release at 11am PST.
Anyone who accepted the invite received the URL about 15 minutes ago:
I’m not including the actual link, as the event required an RSVP
Well, that would explain why my GF UI played a sad trombone noise when I opened it this morning.
Ouch. “The company has applied for patents related to its Magic Canvas software, and the team was reluctant to share specific details about which exact latent diffusion neural network it’s using to generate its made-for-laser-engraving AI artworks.”
If it’s based on potentially copyright-infringing datasets, that wouldn’t surprise me in the least. And as much as I love AI image generation’s capabilities, I don’t like business practices that exploit and abuse my fellow artists and makers.
I hope Glowforge was smart enough to avoid stepping in that bucket of steaming, brown outrage, but their response does not leave me optimistic.
This was my first concern as well. Do you happen to know if anyone has asked them, and what their response was?
I was not at the reveal, but I was told multiple people asked them and were brushed off. They apparently said it’s “new material”, but skirted the question of whether the AI model was trained on copyrighted works. And they stated that it’s been trained on “millions” of items, which leads me to believe they’re either relying on existing LAION indexes, or scraping the internet with zero regard for copyright.
If so, that’s really galling when you consider they’re marketing this to a creative community, many of whom know firsthand what it’s like to have their work ripped off without permission or compensation. Add to that the fact that there are artists in this community who sell files and images who are now in direct competition with this feature.
This was a shockingly tone-deaf decision.
Re: “potentially copyright-infringing datasets,” is litigation actually pending? I don’t know anything about this, just curious…
I can understand the business case, so long as we’re in the wild west of AI / intellectual property law, i.e. better to ask forgiveness than ask permission.
But… gosh that presentation certainly was cheerfully perky and devoid of pesky nuance
That was my thought too. People are going to want to support other artists rather than further support theft of art, so they’re going to need transparency. Yikes, and yet not surprising at all.
Yes, there are several cases working their way through the courts.
Notably, Getty recently sued Stability AI.
And a group of artists sued the companies behind Stable Diffusion and Midjourney. I personally don’t see that getting very far, as the plaintiffs have already made some sloppy mistakes in their arguments.
One recent development is going to make those cases tougher to defend, though. Researchers were able to reconstitute some of the original photos from Stable Diffusion training data. This could be a huge blow against the argument that AI image generators simply break down and remix content without using the original photos or infringing on copyrights.
Not really a surprise to anyone who’s played around with these tools and had the AI spit out an image with a slightly mangled Getty Images or Shutterstock watermark present.
The watermark issue would be pretty damning there, I’d think.
So that is the thing from Bailey that I kept ignoring since it was during work.
Even with the watermarks it seems to me it is far from clear who may prevail.
Works need to be sufficiently transformative to be kosher. Where is the line drawn? The courts will figure it out and we’re along for the ride.
Maybe not. The argument could be made that the AI was simply reconstructing an approximation of what it thought was part of the picture — it doesn’t have the “sense” to know that the watermark isn’t an integral part of the image. But the researchers who were able to extract whole images from the data… that may be damning. One of the key arguments for the defense is that the training datasets don’t store the images, and that research seems to blow that argument out of the water. One expert described it as “lossy compression”.
Spot on. Legally, everything is up in the air right now, and copyright law as it stands is not equipped to deal with disruptive technologies. Buckle up!
I’ll tell you, I would much prefer a world where it was possible to use works as references very liberally. The alternative, where we’re endlessly litigating how much influence is too much influence, would lock up even more works in corporate vaults.
Well copyright law may not be equipped to deal with this, but I’m sure Congress, in its infinite wisdom, will soon enact clear and practical legislation that protects the interests of artists and creatives while also empowering companies and individuals to realize AI’s fullest potential
I know the legality is all murky and I know next to nothing about the actual technology. But my understanding of it, at the most basic level, is that the AI looks at tons of images. Then when asked, it uses the “techniques” in those images (perhaps duplicating the techniques perfectly), but it doesn’t use actual images, or even portions of images, in the finished product. How is that different than a living artist studying artwork and learning how to duplicate a style, or a brush stroke, or a photographic lighting technique?
I have only played around with the AI stuff a little bit, but I don’t think I have ever gotten a finished product that was an exact duplicate (or even close enough to count) of another image. Yes, it has the same style, feel, and look, but so does the finished work of an “original living artist” creating their own “original” painting in an established style?
Just being devil’s advocate here. I really don’t have an opinion yet.
That’s just it - the AI developers say they’re just referencing style but it’s been proven that in many cases they are using actual images and portions of images. It is most damning when a prompt requests artwork in the style of specific artist and/or with a specific artwork title vs something more broad like “impressionistic sunset with bear”.
Then those cases could just be handled the same way as if a living artist copied something, right? Like, a living artist can copy artwork and we punish them for that, but we don’t say stop using living artists? And if a living artist copies an art piece and is punished and then creates new art that isn’t a copy, then that is okay, right? So is the only reason we don’t like AI to do the same exact thing just because they can do it so much faster and more efficiently? Are we saying “No to all AI” because some images are copied? If so, why don’t we say “No to any living artist” because someone forged something?
I am not trying to steal food from artists’ mouths here, but I am trying to understand the difference.
I think you’ve leapt quite a bit too far, I’m not saying ‘no to all AI’ at all.
I’m saying that what the AI developers have claimed is disingenuous and needs to be clarified.
They are all taking the tack right now that everything that comes out of their generators is NOT infringing on anyone else’s work.
For a real-world example of handling rights in functional artwork: the US Library of Congress Digital Collections contain many works listed as “public domain” (along with how they entered the public domain), but also many that are specifically listed as not. For those, they clearly state that indemnification is up to the end user to obtain and, where possible, they will point to the known rights holder.