Allegro

Art & A.I. — Who Owns Creativity?

Volume 125, No. 6, June 2025

Harvey S. Mars

This is one of two articles in this month’s installment of Local 802’s A.I. series. The article below is by Local 802 Recording Vice President Harvey S. Mars, the officer representative to the Local 802 A.I. Committee. See also Ken Hatfield’s article elsewhere in this issue.

Harvey Mars with Katie Wagner, the executive director of Volunteer Lawyers for the Arts

If A.I. “ingests” your art and creates a new piece that is based in part upon it, is that considered fair use? In late May, I attended a symposium sponsored by the Volunteer Lawyers for the Arts (VLA) devoted to legal developments concerning generative artificial intelligence, including an exploration of how the fair use doctrine will be applied. The panelists were three well-known intellectual property attorneys, and the moderator was Katie Wagner, VLA’s executive director.

Understandably, the panelists posed more questions than answers, since this topic is still evolving and depends on rulings anticipated in ongoing litigation that will define the contours of fair use. If a use of copyrighted material is deemed a fair use, the copyright holder cannot prevent it, and the unlicensed use will be permitted.

Fair use analysis involves consideration of four factors that are codified in Section 107 of the Copyright Act of 1976. They are:

  1. The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes.
  2. The nature of the copyrighted work.
  3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole.
  4. The effect of the use upon the potential market for, or value of, the copyrighted work.

These factors require individualized consideration specific to the facts of each case. The panelists noted that one factor in particular may be determinative of the analysis: the fourth one listed above, the effect of the use upon the potential market for, or value of, the copyrighted work.

Generative artificial intelligence algorithms that produce new works from a model trained on copyrighted works necessarily impact the market for and value of those copyrighted works. Works produced by generative A.I. will adversely affect the marketability and licensability of works that are not A.I.-produced.

The panel remarked that this was one of the conclusions reached in a May 2025 report on generative A.I. training issued by then-Register of Copyrights Shira Perlmutter. A copy of this 108-page report is available here. A synopsis of the report can also be found here.

The report notes that:

“The copying involved in A.I. training threatens significant potential harm to the market for or value of copyrighted works. Where a model can produce substantially similar outputs that directly substitute for work in the training data, it can lead to lost sales. Even where a model’s outputs are not substantially similar to any specific copyrighted work, they can dilute the market for works similar to those found in its training data, including by generating material stylistically similar to those works.”

And:

“Making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”

It is worth noting that after this report was issued, President Trump fired Shira Perlmutter. This appears to be a rather brazen act of retaliation against Ms. Perlmutter for having issued a report whose conclusions were contrary to Trump’s political and financial agenda. (The AFM posted this statement in response to her firing.)

Given Trump’s reaction to the Copyright Office’s report on generative A.I., federal law may not be a promising avenue for copyright protection for works used to train generative A.I. models. The panel commented on the possibility that state statutory or common law, such as the right of publicity, which has been useful in controlling the proliferation of “deep fakes,” might be used to regulate generative A.I. Currently, 48 states have A.I. legislation pending and 26 have already adopted legislation.

However, the hope that state legislation will be a panacea could be very short-lived. Several weeks ago, the U.S. House of Representatives passed a budget bill that, if enacted, would place a 10-year moratorium on the enactment of state legislation regulating artificial intelligence. The section of this legislation concerning A.I. (Artificial Intelligence and Information Technology Modernization) states: “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the ten year period beginning on the date of the enactment of this Act.”

At present it is unclear whether this legislation will survive Senate consideration, because it is included in a budget reconciliation bill and, as detailed below, Senate rules restrict non-budgetary provisions in such bills. Regardless, swift action should be taken to prevent the enactment of this portion of the bill.

SO WHAT CAN BE DONE TO STOP IT?

As reported in a May 15, 2025, article published in The Trichordist, the A.I. “safe harbor” [i.e., the prohibition on states regulating A.I.] is not a done deal, even though it appears in the House version of the budget. There are several procedural and political tools available to block or remove it from the broader legislative package.

  • Senate “Byrd Rule” challenge and holds: Because reconciliation bills must be budget-related, the Senate parliamentarian can strike the safe harbor if it’s deemed “non-germane,” which it certainly seems to be. Senators can formally raise this challenge.
  • Conference committee negotiation: If different versions of the legislation pass the House and Senate, the final language will be hashed out in conference. There is still time to remove the moratorium here.
  • Public advocacy: Artists, parents, consumer advocates, and especially state officials can apply pressure through media, petitions, and direct outreach to lawmakers, highlighting the harms and democratic risks of federal preemption. States may be able to sue to block the safe harbor as unconstitutional, but let’s not wait to get to that point.

However, assuming this safe harbor provision survives procedural challenges in the Senate, the courts may still rule it an unconstitutionally overbroad restriction on the states’ ability to protect their citizens’ consumer rights and civil liberties (their police powers). We will be following this very closely.

Finally, the panel considered one of the most contentious aspects of fair use in the realm of A.I.-generated art: the question of transformative use. When an A.I. algorithm processes existing artworks to create new pieces, is this transformation sufficient to qualify as fair use? This remains a murky area, with ongoing debates about the balance between innovation and the rights of original creators.

Nonetheless, it seems that the best point at which to protect artists’ copyrights is at the ingestion level, where the A.I. technology is first being trained. One way to protect copyrighted works is through a statutory system of compulsory licensing that requires compensation to copyright holders if their works are used in any aspect of generative A.I.

Needless to say, there will be lots more to share in this rapidly developing area. Local 802 and its A.I. Committee will provide timely updates. Attending this symposium was an important step toward that goal.

If you are interested in the Local 802 A.I. Committee, please send an e-mail to Local 802 Recording Vice President Harvey Mars at hmars@local802afm.org and A.I. Committee Chair Jerome Harris at jeromeharr@aol.com. Send feedback on Local 802’s A.I. series to Allegro@Local802afm.org.


OTHER ARTICLES IN THIS SERIES:

  • VICTORY FOR HUMAN CREATORS IN A.I. COURT CASE
  • Defining the Magic
  • Watermarking and Fingerprinting Explained
  • A.I. LEGAL UPDATE
  • A FEW WAYS A.I. COULD BENEFIT MUSICIANS
  • The TRAIN Act is a good start in protecting musicians from A.I. exploitation
  • Case Tracker: Artificial Intelligence, Copyrights and Class Actions
  • BIG MUSIC AND A.I.
  • So how does A.I. actually work?
  • Protecting musicians from the existential threats of artificial intelligence
  • A DEEP DIVE INTO HOW A.I. AFFECTS MUSICIANS
  • “It all sounds the same”
  • “Artful” Intelligence
  • “How are you going to stop AI from stealing our jobs?”
  • Unleashing Creativity