Volume 124, No. 7, July 2024

Jerome Harris

This month’s submission for Local 802’s A.I. series is by Jerome Harris, a member of Local 802 since 1979.

When thinking about artificial intelligence from the perspective of those for whom music making is a livelihood, it’s useful to note a distinction. There are various forms of AI; the one that is most concerning for workers in creative fields is generative AI. This technology can assemble (“generate”) new material based on the patterns it finds in old material. This involves feeding vast amounts of digitized data — including recorded music — into powerful computer systems designed to analyze and isolate patterns. This is called “training” these AI systems. The systems can then be ordered to fabricate recordings based on those patterns, as specified by commands such as text-based prompts. Note that the underlying compositions and their recordings that are being used for AI training were created by humans and are owned by them or by business entities, unless they are in the public domain.

As noted by Hamid Ekbia in his AI column in the June Allegro, automated production of pieces via “cloning” of past musicians’ work is not new. Composer/scientist David Cope did pioneering work over 40 years ago on computerized distillation of the melody, rhythm, harmony and counterpoint patterns of various European classical composers, and algorithmic assembly of new pieces using their old patterns. (Check out David Cope’s “cloned Vivaldi” string orchestra piece.)
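As a toy illustration only — not Cope's actual system, and far simpler than today's neural networks — the core idea of "finding patterns in old material and assembling new material from them" can be sketched as a first-order Markov chain over notes. The corpus, pitch numbers and function names below are hypothetical examples:

```python
import random
from collections import defaultdict

def train(melody):
    """Count which note follows which in the source material."""
    transitions = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Assemble a new melody by sampling the learned note-to-note patterns."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break  # dead end: the last note was never followed by anything
        out.append(rng.choice(choices))
    return out

# "Train" on a short corpus (MIDI-style pitch numbers), then generate.
corpus = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
model = train(corpus)
new_melody = generate(model, start=60, length=8)
```

Every note-to-note move in the generated melody was observed in the corpus, which is the whole trick: the output is novel as a sequence, but each local pattern is borrowed from the training material.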

What is new since that period is the cumulative effect of decades of technological and business changes — changes so old and ubiquitous that many of us take them for granted — including:

  • The dominance of digital sound recording.
  • Development of data compression, making audio and video files small enough for practical transmission.
  • The rise of the commercial internet and World Wide Web infrastructure, enabling easy sending and receiving of audio among individuals and businesses — file sharing, broadcasting, narrowcasting, streaming, etc.
  • Development of the internet’s “heavenly jukebox,” a decentralized repository of a huge share of digital recordings of humanity’s music — licensed and compensated, or posted without consent or payment — to be accessed by listeners, and to be ingested by firms training their generative AI systems.
  • Hundreds of millions of dollars being invested in the creation and deployment of music-related generative AI systems, by major global firms (Google/YouTube, Meta/Instagram/Facebook, Microsoft, ByteDance/TikTok) and newer startups alike (OpenAI, Suno AI, Udio, Soundraw, Aiva). Read more here and here.
  • The exploitative use of recordings, without their owners’ control, consent or compensation, to build generative AI music systems.
  • The speed, sophistication and relative low cost of computer hardware and software, bringing systems for automated assembly of human-sounding “music” recordings within reach of millions of people.

What does all this mean for those who earn a living by making music?

There is a long history of business owners using automation to cut production costs (including what they pay workers for our labor), increase production output and cut production time. Music work is not exempt from this "disruption" dynamic.

The broad category of "functional music" — music "not designed for conscious listening," as the CEO/co-founder of one functional sound company reportedly put it — is an early target for applying this hyper-automated production technology. Firms such as Endel tout how their generative AI systems can cheaply "create" endless "music" to accompany studying, exercising, driving, relaxing and even focusing.

Music for use in low-budget advertising, especially on social media, is also a ripe area. The webpage announcing the release of Meta’s AudioCraft generative AI music “toolbox” explicitly suggests imagining “a small business owner adding a soundtrack to their latest video ad on Instagram with ease.” They didn’t even need to add “without hiring any musicians.”

Web searches for "AI royalty-free music" or "AI stock music" bring up well over a dozen programs. Will end users pay stock music libraries for what they can produce themselves?

We already know that musicians are employed to compose and produce soundtrack music for videogames. Some of that work is under threat from generative AI, and work creating music for low-budget films is surely threatened as well.

The output of text-prompted song-generation systems like Suno and Udio suggests that the technology for assembling recordings of popular-genre songs with sung lyrics has advanced since I first encountered it last year on OpenAI's "Jukebox" website. There are also related generative AI systems such as Google Lab's "TextFX," a collaboration with rapper/producer Lupe Fiasco aimed at "rappers, writers and wordsmiths."

Generative AI “voice cloning” technology has been used to create new “performances” by deceased vocalists, and by a singer whose voice was impaired by a stroke. Given the advent of 3D animation projection, we can expect to see more AI-enabled “virtual artist” spectacles competing with flesh-and-blood entertainers.

We face the specter of a flood of AI-assembled music-like recordings (here I am echoing Michael Pollan’s term for hyper-processed “junk food”) diluting, corroding and weakening the market for human-made music. As generative AI makes it easy and cheap for people with smartphones and laptops to churn out recordings that are “good enough” (or, as Suno describes its mission, to “democratize” music creation), interest in paying for human-made music could decline.

As Lucian Grainge, chair and CEO of Universal Music Group said in his 2023 “happy new year” staff memo, “…some platforms are adding 100,000 tracks per day. And with such a vast and unnavigable number of tracks flooding the platforms, consumers are increasingly being guided by algorithms to lower-quality functional content that in some cases can barely pass for ‘music.’”

Audiences could form their tastes around machine-fabricated “music.” Those of us working in live music performance should take heed: this could eventually reach beyond recordings to affect the economics of the live-music enterprise, already reeling from the oft-noted “graying” of its audience base. See, for example, this commentary.

By the way, the effect of generative AI technology on employment in other “white collar” work areas is not exactly comforting. See examples in graphic arts, law (here and here), and movie production (here and here).

The danger to music workers’ livelihoods is clear; it is present; it is time for action. We — our union, our government, our legal system, and we individuals — must act to protect our economic interests, and our cultural interests as well. Here are some things to do.


Laws can rein in the rapacious use of AI and protect against use of our work without permission or compensation. We should demand that our federal and state lawmakers craft such legal protections. Some laws have been passed, like Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act. Others have been proposed and gained support, like the Generative AI Copyright Disclosure Act, and New York’s Digital Replica Contracts Act, which currently awaits Governor Hochul’s signature.

Many of these efforts concern visual media; we must make sure that strong laws pertain to music as well. The Authors Guild and a number of other groups representing professional creator workers crafted a set of elements that laws need to include to effectively protect our work against predatory AI. Let’s get the AFM to sign on, or craft an updated set of music-focused guidelines for new legislation.


Tech firms are being sued in the U.S. and elsewhere for infringement of copyrights, trademarks, rights of publicity and other infractions (see here, here, here, here and here). These sorts of cases may provide models for musicians and our union to consider. Another reason for following these cases: legislators often look to court actions as they consider bills to propose.


Participate in and support union and advocacy group organizing efforts to protect our rights and negotiate strong collective agreements. Inform our peers and allies about where and how generative AI can be used to kill jobs and lower pay. (To join the Local 802 AI committee, send an e-mail of inquiry to Local 802 Chief of Staff Dan Point at


  • Support development of ways to "watermark" recordings so their use by generative AI systems can be detected, even in the products those systems produce.
  • Follow the tools being developed, mainly for visual media, to impede, disrupt or resist AI-powered theft and extraction of human creativity (see here, here, here, here and here).
  • Support the creation of similar protective tools for music recordings.
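To make the watermarking idea concrete, here is a deliberately naive sketch, assuming audio represented as plain Python integers (16-bit PCM-style samples). It hides a bit pattern in the least-significant bit of each sample, where it is inaudible but machine-readable. Real audio watermarks are engineered to survive compression, resampling and AI processing, which this toy scheme does not; the sample values and payload below are made-up examples:

```python
def embed(samples, payload_bits):
    """Hide payload bits in the least-significant bit of the first samples."""
    out = list(samples)
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the payload bit
    return out

def extract(samples, n_bits):
    """Read the hidden bits back out of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

# Toy PCM samples and a 4-bit "signature" to embed.
audio = [1000, -2000, 1500, 300, -45, 77, 1234, -999]
mark = [1, 0, 1, 1]
marked = embed(audio, mark)
recovered = extract(marked, len(mark))  # recovered == mark
```

Each sample changes by at most 1 out of a 16-bit range, so the watermark is inaudible; a detector that knows where to look can still recover the signature and flag the recording's use.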

Allowing the blind imperatives of tech business to lessen the centrality of humanness in the millennia-old activity of music — an activity that uniquely binds heart, mind, body, individuality, sociality, innovation and tradition — would be a sacrilege and a grievous loss.

Jerome Harris, a member of Local 802 since 1979, is widely recognized as a unique musical stylist, garnering international acclaim for his incisive and versatile voice on both guitar and bass guitar. Read his full biography. Send feedback on Local 802’s A.I. series to
