@kevin2kelly@glichfield@FlyingTrilobite@neilturkewitz@WIRED@sciam 1/ Mr. Kelly, you are operating on assumptions. Your claim that "AI has not harmed actual people" is pure speculation, not fact. So let's talk facts, with the evidence you asked for. Let's begin, and please be patient, it's going to be a bit long 🧵👇:
@kevin2kelly@glichfield@FlyingTrilobite@neilturkewitz@WIRED@sciam 2/ A quick intro to who I am. I am a professional artist with over 15 years in the film industry (Marvel, HBO, Universal etc), and I have been an advocate for my community, raising alarms on the unethical practices of AI companies and how they harm us.
@kevin2kelly@glichfield@FlyingTrilobite@neilturkewitz@WIRED@sciam@atg_abhishek 7/ Data Laundering, From Research to Commercial:
Stability AI, Midjourney and even Google have used various LAION datasets in their commercial models. This is surprising, since LAION is supposed to be a non-commercial research dataset (cont.)
@kevin2kelly@glichfield@FlyingTrilobite@neilturkewitz@WIRED@sciam@atg_abhishek 9/ Our full names used, potential for privacy violations: To generate media through AI/ML models, users input prompts telling the software what to generate. Artists' full names are commonly used—in fact, encouraged—as part of those prompts (cont.)
1/ I really hope what @Adobe claims is true; if it is, it would be a good step in the right direction. However, after some cursory digging I have serious questions. Let's dig in 🧵
2/ Right off the bat, have Adobe Stock contributors given their explicit full consent to be a part of this? Did they agree to opt in? Why is there no option for Adobe contributors to opt out? This seems concerning to me.
3/ The Firefly FAQ mentions training data. So far the explanation is that the model is trained on Adobe Stock, openly licensed work, and public domain content with expired copyright. What exactly does "openly licensed work" mean? There must be complete transparency here.
1/ This might be the most important oil painting I’ve made:
Musa Victoriosa
The first painting released to the world that utilizes Glaze, a protective tech against unethical AI/ML models, developed by the @UChicago team led by @ravenben. App out now 👇
2/ This painting is a love letter to the efforts of this incredible research team and to the amazing artist community. This transformational tech takes the first of many steps to help us reclaim our agency on the web, by making our work harder to exploit. Detail shots:
3/ So how does Glaze work?
@ravenben describes:
“Glaze analyzes your art, and generates a modified version (with barely visible changes). This "cloaked" image disrupts AI mimicry process.” Quote tweet below.
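To make the quoted idea concrete: Glaze's actual method (per the UChicago team) computes a targeted "style cloak" via adversarial optimization against a feature extractor, which is far more involved than what fits in a tweet. The toy sketch below only illustrates the general principle it relies on, that a perturbation bounded to a tiny per-pixel budget stays visually imperceptible while still altering what a model sees. All names and the `eps` budget here are my own illustrative assumptions, not Glaze's code.

```python
import numpy as np

def cloak(image: np.ndarray, perturbation: np.ndarray,
          eps: float = 4 / 255) -> np.ndarray:
    """Apply a perturbation clipped to an L-infinity budget `eps`,
    so no pixel changes by more than ~1.5% of its range.
    (Toy illustration; NOT Glaze's real optimization.)"""
    delta = np.clip(perturbation, -eps, eps)   # bound each pixel's change
    return np.clip(image + delta, 0.0, 1.0)    # stay in valid pixel range

rng = np.random.default_rng(0)
art = rng.random((64, 64, 3))                  # stand-in for an artwork
cloaked = cloak(art, rng.normal(0.0, 0.01, art.shape))

# The change is bounded and hence barely visible:
print(float(np.abs(cloaked - art).max()) <= 4 / 255)  # True
```

In the real system the perturbation is not random noise: it is optimized so the cloaked image's features resemble a different art style, which is what disrupts mimicry.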
1/ As I learned more about the deeply exploitative practices of AI media models, I realized there was no legal precedent to set this right. Let's change that.
2/ I am proud to be one of the named plaintiffs in this class action suit. I am proud to do this alongside fellow peers, and that together we'll give a voice to potentially thousands of affected artists. I'm proud that we now fight for our rights not just in the public sphere but in the courts!
3/ I am in awe of the excellent, efficient and detailed work of Joseph Saveri, Cadio Zirpoli, Travis Manfredi at the Joseph Saveri Law Firm and Matthew Butterick! They've built a hell of a case! I cannot wait to see them in action. We couldn't have chosen better fighters!
1/ This. It's all a distraction by AI companies and AI advocates to divert focus from the real issue: that this tech is exploitative, and it relies on ill-gotten data from artists and the public to train for-profit models without our knowledge or consent. Essentially theft.
2/ They'll squirm and try to divert in any way possible. So you'll hear cries about why it's actually OK for AI media to be exploitative because "it's Art!". But if it is art, it's stolen art.
"This is ableist," they'll cry, while ignoring how deeply this affects disabled artists.
3/ They'll say "Elitist!" to artists who struggled and worked their asses off to (barely) make it, all while AI corporations (some led by elite ex hedge fund managers) exploit the hard work and data of hundreds of thousands of working artists, artists' families, and the public.
1/ A reminder to Mr. Kelly (one of the founders of @wired) that a month ago it was explained to him, in full detail, how AI is harmful to artists and the public. Instead he ignored it all to continue the objectively untrue narrative that the harm is imaginary. Read here:
2/ Important to note: we learn about awful new things almost daily now. For example, just a couple of weeks ago @samdoesarts' work was turned into AI models, and a company called civitai emailed him to mock him.
1/ A deeply embarrassing response from @ArtStationHQ to artists protesting for their rights.
This language is so contradictory. No artists got to decide or choose how THEIR work was used when AI companies scraped it from sites like ArtStation (cont.)
2/ No artists got to decide or choose when AI users use their full names to generate imagery based on their work. (Also, sly move there ArtStation, lumping AI users in with your artist userbase...)
Furthermore, by saying they don't want to become a "gatekeeper" (cont.)
3/ with site terms that stifle AI research and commercialization, is ArtStation signaling it won't protect users' data from being scraped by companies such as Unstable Diffusion? Because that is neglectful, and it's basically saying your art isn't safe there.