Artist And Activist Karla Ortiz On The Battle To Preserve Humanity In Art
A darkly humorous meme currently circulating online shows an artist lamenting “I was promised that technology would help me balance my checkbook and do my laundry so I would have time to do my art and writing, not technology that does the art and writing so I have more time to balance my checkbook and do my laundry!”
For the otherwise good-humored Karla Ortiz, a fine artist and illustrator who has worked on projects for Marvel Studios, Ubisoft, and Industrial Light & Magic since the early 2010s, the situation is no laughing matter. In 2022, she first became aware of websites using AI-generated imagery. It intrigued her at first, then alarmed her as she discovered how the generative models were trained, using billions of images “scraped” from the Internet without regard for copyright, consent, or compensation.
She began convening town hall-style meetings to educate the arts community about the threats she saw generative AI posing to professional artists in every corner of the industry. These meetings helped artists organize responses to the rise of technologies pushed by some of the biggest and wealthiest companies in the world.
In January 2023, Ortiz became a named plaintiff, along with cartoonist Sarah Andersen and illustrator/fine artist Kelly McKernan, in a series of class action suits against the companies behind generative AI art models, including Midjourney, Stability AI, and the online artist community DeviantArt. On May 8, the presiding judge in the US District Court for the Northern District of California indicated in a preliminary ruling that he was “inclined to deny all motions to dismiss… infringement claims,” meaning the case will likely move forward, but a final ruling is still pending.
Ortiz later testified before the US Senate Judiciary Committee’s Subcommittee on Intellectual Property, advocating for stricter regulation of the activities of tech companies.
I spoke with her by Zoom in May 2024 for a feature story on emerging technologies to help protect human artists against the abuses of generative AI. Her substantial and informative comments on the state of play have been lightly edited for length and clarity.
ROB SALKOWITZ: When did you first become aware of generative AI systems?
KARLA ORTIZ: Around March or April 2022. I came across a website called Weird Wonderful AI, which had a list of artist studies. The list included friends of mine and people I knew, which was strange. I reached out to some friends whose names I recognized and found out they weren’t involved. The site was selling merchandise and NFTs that looked suspiciously like the artist studies they featured, and I thought that was messed up since they obviously didn’t want to be a part of it. It was small and weird, so I didn’t pay much attention to it initially.
Fast forward to August and September, and we had Midjourney, Stable Diffusion, and DALL-E. I got curious and found out these models were built off datasets that included almost all my fine artwork and commercial work, and that of many peers. I did a little research and found out these AI models were trained using our work without permission. I also discovered people were using our names to generate similar imagery. I got really, really upset.
My boyfriend, who was also exploring AI, tried generating some images, and it was scary how good it was. It was clear this could end up taking away jobs, especially from people just starting out. That led me to take action. I organized a panel with the Concept Art Association and invited AI ethics experts. They were shocked at how unethical it was, which validated our concerns.
Have you tried using generative AI yourself?
I tried DALL-E, thinking it might be better because you couldn’t put in artist names. But it wasn’t useful for my needs. It generated mismatched, generic imagery. Later, I realized all these models are similar and likely trained on the same data, including our work.
If you were using AI in your own practice, could it help with revisions or speeding up work? That’s something the vendors claim as a benefit.
No, not at all. They sell it that way, but ethics aside, even if there were an “ethical model,” the tech itself is flawed. AI has a hard time with revisions. For instance, an art director might ask for changes that the AI can’t handle properly. It doesn’t understand the underlying logic of the image, so it’s terrible for things like armor or equipment, where the form follows function. I can do it faster myself.
High-quality companies with big budgets and strict standards find AI useless for revisions. The horror stories mostly come from mid-sized or smaller companies that lack resources and sometimes prioritize quick, cheap results over quality.
Like needing a quick book cover or poster?
Exactly. That’s work that used to provide opportunities, especially in lean times like during strikes. Another example is pitch work, which is how a lot of people get work. With the rise of generative AI, pitch work is mostly dead. That shift has been devastating; I was at an industry event recently and it was all anyone talked about. This issue isn’t well-studied, and the impact on freelancers is often overlooked because the numbers are all about full-time jobs. A lot of us are freelancers — something like 30% compared to 7% in other industries — so the financial losses don’t show up as unemployment. There is just less and less work, even for more senior people.
Have you seen job losses in the industry due to this, especially in commercial areas like storyboards or character designs?
Yes, I’ve been involved in productions where generative AI was used. I’m cautious about speaking too much because it’s a small industry, but I can say that in these projects, my role was reduced. For example, character design jobs that could have been mine were done by AI, or I was handed material on a project that had already been generated by AI, work that could have been billable hours for me. It has already affected my income.
Have industry associations stepped up?
Yes, the Concept Art Association, the Animation Guild, and a few other really wonderful artist organizations got together and produced a report based on a survey of about 300 C-level executives and their teams. It covers all forms of entertainment companies – films, games, news, everything – asking what they think of generative AI and what they’ve actually done with it. And it was pretty intense. For example, they expect that by 2026, approximately 200,000 to 300,000 jobs would be affected in the US, concentrated in California, New York, Georgia, and Washington. I’m not surprised. Some of us are experiencing the worst time in our careers. I’m lucky because I diversified, I have lots of clients, and I’m at a stage in my career where I’m known for my quality, and even for me, it’s still been hard.
Is there some equivalent to SAG or the Writers Guild on the commercial arts side? Those guys went to the table and went hammer and tongs on this issue during the strikes last summer.
Yes and no. The problem is that concept artists and artists like myself, illustrators, the folks who are visual artists in the industry, are split between two unions, the Costume Designers Guild and the Art Directors Guild. The other problem is that the bosses and the workers are in the same unions. So you have a situation where some bosses who don’t fully get what this is, or the ethical and legal implications of these models, think it could be a quick way to do what they want without having to ask anybody else to do it. So they kind of want to use it.
For a while, there was some back-and-forth and internal conflict about whether to go to bat on this. I think they’re starting to get the memo. I think the Art Directors Guild in particular is starting to realize that they need to stand by their membership and their artists. I haven’t heard from the Costume Designers Guild, but I hope everyone involved in the unions recognizes that this is coming for everybody.
I think a lot of people got that wake-up call when OpenAI released Sora. Because it’s no longer like, oh, it’s just an image or an actor or a script. It affects everyone in the crew. With Sora, you can see it could potentially say goodbye to commercials, VFX shots, everything. That’s art directors, costumes, makeup, cinematographers, gaffers, everybody.
When did you decide legal action was necessary?
After speaking with experts, it was clear we needed to set a legal precedent. I, along with two others, created a document listing everything we thought was wrong and shopped it around. Eventually, I found the Joseph Saveri Law Firm through a Vice News article about their lawsuit against GitHub for their Copilot feature. We met in October 2022, and they agreed to represent us.
I connected with other artists like Kelly and Sarah, who were also angry about this. Even an IP lawyer who initially dismissed our concerns later admitted we might have a case. Class action was the way to go because none of us had the funds to pursue this individually.
In addition to the legal actions, when did you start getting involved in technological countermeasures like Nightshade?
It started at the second Concept Art Association town hall. During that meeting, Ben Zhao from the University of Chicago asked how he could help. A couple of weeks later, he emailed me to demonstrate a tool they developed called Glaze. I was impressed and offered to help in any way, including getting artists to survey it. By that point, I no longer felt comfortable sharing my work online because people were stealing it.
I had confronted some CEOs on Twitter and had numerous spats with AI proponents who threatened to make models off my work. When Ben showed me Glaze, it felt like a lifeline. It gave me hope that I could post my art online again without fear of it being stolen. Seeing his work was an emotional moment for me because it meant I could reclaim my artistic space on the internet. This is crucial for artists to get jobs, connect with audiences, and share their work with the community.
Without offering it up to be sampled and violated by everyone.
Exactly. There was no consent involved. LAION [the LAION-5B dataset that is the basis for many of the popular image generation tools], for instance, took work directly from my website. The majority of my fine art work had been reposted on Pinterest by third parties I didn’t know. Artists were asking how they could share anything online safely. Ben’s work on Glaze provided a solution. He was concerned about the UI, but I reassured him that our community could help. We found an art director who improved it, and I had the honor of creating the first publicly released painting protected with Glaze, “Musa Victoriosa,” an oil painting I did.
What do you have to say to the people who defend this, saying it opens up opportunities for creativity and visual skills they’ve wanted to express but never had the talent for?
I tell them they should be angry. They should be angry at the companies. These products could have been built on public domain only. That’s work that belongs to everybody, and any expansion upon that could have been done by actually asking artists and doing licensing agreements. Whether I agree with that or not is a different thing, but it could have been done ethically. Instead, these companies told users to use a product riddled with theft and legal issues, and if their model accidentally generates something that infringes on copyright, it’s not their fault; it’s the user’s fault because they prompted it. That’s not how it works. So I don’t understand why people defend something so unethical when it could be ethical. It’s disgusting. They should demand more from these companies.
What about the people who say, well, AI is coming whether we like it or not, so might as well get used to it?
Theft happens, so what are we going to do about it? Murders happen in the street. It’s a human condition to want to murder. Why should we do anything about it? No, that’s an excuse! If this supposedly revolutionary technology can only exist by trampling over laws and people’s rights, is it really that revolutionary? You’re going to fall behind what? What race are we on? There has to be a better way to approach this, a better way to visualize what this technology could be without disregarding copyrights, consent, privacy, livelihoods and ownership of one’s work.
What can people do, considering the huge amount of money and influence behind the spread of generative AI right now?
A lot. I was known in my industry before all of this, but most people don’t know who I am. But I still got out there and talked to the media, wrote about it, educated people, and tried to educate myself. An individual can educate themselves, recognize the ethical issues, and realize that it’s not just affecting artists but potentially anyone whose work is digitized. Contact lawmakers, demand better practices from companies, and make a stink about it. When enough people do that, it changes things. From 2022 to now, the narrative has shifted because people spoke out.
People can also hit regulatory agencies, like the FTC, and say they are not okay with these practices. Making a big stink about it does work.