AI art is facing a copyright problem. Here’s what it means for creators

By: Verbit Editorial

Mickey Mouse copyright header

This summer, X (formerly Twitter) introduced its subscribers to Flux, a cutting-edge artificial intelligence image generator developed by Black Forest Labs. Soon after, the platform became saturated with AI-generated images of celebrities and fictional characters.

Some entertainment and gaming industry titans were not happy. Disney and Nintendo, two companies with a track record of aggressively defending their intellectual property, place strict limits on how their characters and images can be used. That left X users wondering how the companies would respond to the hundreds of millions of high-quality images of their favorite princesses and Italian plumbers firing guns or hobnobbing with controversial political figures.

Still, it’s not just major corporations that must contend with the nonconsensual use of their material. The recent explosion of AI has raised a whole host of copyright concerns for creators of all types, from influential global animation studios to individual artists with modest followings. Two questions loom especially large: who owns AI-generated images, and should the creators whose work is used to train AI models share in the profits from the images and text those models produce?

While media outlets, publishers, network studios and other companies worldwide that own valuable intellectual property are gearing up to fight the tech companies in court, individual creators — illustrators, musicians, authors — who lack the corporate muscle to defend their interests may face a different battle.

Verbit examined news articles and legal research to see what the rise of AI means for creators.

A cartoon gloved Mickey Mouse hand shaking hands with a cartoon human hand

Legal challenges loom for content created with AI

As AI pushes the boundaries of what’s considered fair use, critical questions are being hashed out in court, and a central tension in several recent lawsuits is what, exactly, counts as fair use. When deciding whether an unauthorized use of copyrighted material is fair, US courts weigh several factors: the purpose and character of the use, including whether it is commercial and how transformative it is; the nature of the original work, such as whether it is creative or factual; how much of the work is used; and the effect of the use on the market for the copyright owner’s work.

For example, quoting excerpts from news articles or sharing short clips from films for commentary or critique is typically fair use, as is publishing a parody of a copyrighted work. In contrast, reposting an entire article or movie without authorization or any added commentary generally is not.

The New York Times, for example, is one of several news publishers suing OpenAI for copyright infringement. The Times alleges that OpenAI trained its chatbots on millions of the newspaper’s articles and that those chatbots now compete with it. OpenAI, however, defends its models as transformative and thus protected under fair use law.

While that lawsuit is still pending, a similar argument succeeded in the Authors Guild’s lawsuit over Google Books, decided in 2015. In that case, the court ruled that Google Books, an online service providing searchable excerpts from millions of books, was transformative enough to qualify as fair use, despite the Authors Guild’s objections that the service unfairly exploited its members’ work.

Another prominent case involves a group of artists who sued several tech companies behind popular text-to-image models for copyright infringement. The plaintiffs allege that the generative AI companies used their copyrighted work without authorization to train AI models, enabling those models to generate images in the style of particular artists when prompted. The lawsuit also contends that one of the defendants, Midjourney, an AI image generation company, once shared a list of 4,700 artists, including some of the plaintiffs, whose work its programs could imitate.

The outcome of the case, yet to be determined, could have major implications for artists. If companies and consumers can use image generation tools to create an image almost instantly, for a fraction of a penny and without fear of legal action, they might ultimately decide that hiring humans to draw for them is simply too costly. This is doubly true if people can generate art that perfectly matches the styles of their chosen artists.

Cartoon image of gavels flying through the air

Looking to past fair use cases as precedent for the future

One possible path forward for companies and major content producers wanting to protect their intellectual property rights would be for AI companies to sign licensing agreements with them. OpenAI has already struck sizable deals with numerous publishers, including Condé Nast, Time, News Corp., Axel Springer, the Financial Times and the Associated Press, agreeing to pay them millions of dollars for the rights to use their work over the next few years. Another pathway is being tested by Perplexity, an AI-driven search engine that recently launched a revenue-sharing program.

This is not without precedent. In 2006, when YouTube was still in its infancy, record labels threatened legal action as music sharing boomed on the platform. Eventually, YouTube signed licensing deals with the labels, offering them a share of ad dollars in exchange for letting their music stay on the platform.

But while major news publishers and record labels may have the legal resources to strike deals with tech companies and profit from their own work, individual artists, musicians, publishers and other creators are often not as fortunate, leaving them with little leverage in copyright disputes.

Smaller YouTube creators often say they are falsely accused of copyright violations for sampling snippets of lyrics or song covers in educational videos. When a big music label issues a copyright claim, individual creators often have little recourse given the narrow scope of fair use law: they can either take down the video or hand the label a portion of their ad revenue, a crucial income stream that keeps many creators afloat.

Exactly how thorny copyright and fair use issues will play out as AI evolves is still unknown. However, as more people use generative AI to produce text, images and videos, ambiguous cases will likely arise. It will be up to courts and legislators to define fair use, but that alone may not be enough to protect smaller creators who lack the resources to defend their work or secure deals with tech companies.

By Wade Zhou