Allegro

Defining the Magic

Volume 125, No. 5, May 2025

Rayzel Fine and Lauren M.E. Goodlad

This month’s submission for Local 802’s A.I. series is by Rayzel Fine and Lauren M.E. Goodlad. Rayzel Fine is an undergraduate student at Rutgers, the State University of New Jersey. Rutgers faculty member Lauren M.E. Goodlad chairs the Steering Committee of the school’s interdisciplinary “Critical AI” initiative, and is the editor of the initiative’s journal Critical AI.

When I was in 10th grade, my English teacher tasked us with writing a poem that mimicked the style of Charles Bukowski’s “Defining the Magic.” Bukowski starts every line of his work with “a good poem,” before using figurative language to describe what a poem is and what it can accomplish. The result of this classroom experiment was a highly varied selection, ranging from verses about good apples to my own poem about good language. Although the assignment centered on imitation, each student wrote a unique, personal poem. Perhaps that is because as each of us drew on Bukowski’s form, we brought our own individual experiences to the work of defining the magic.

In this way, imitation doesn’t necessarily preclude an authentic voice–though it does complicate the issue. That is especially true when the imitation in question is automated by “generative AI” systems that train on the data of human creators at an almost unimaginable scale. For example, in developing ChatGPT, OpenAI “scraped” most of the contents of the internet–after which it hired human data workers and paid them peanuts to help the system’s outputs become less toxic and seem more human-like. To train Suno AI, its text-to-music model, Suno Inc. gobbled up “all music files of reasonable quality that are accessible on the open internet.” “It is no secret,” the lawyers for Suno Inc. blithely admitted as they attempted to justify this daylight robbery, “that the tens of millions of recordings that Suno’s model was trained on presumably included recordings whose rights are owned by the Plaintiff in this case.”

In her article, “Poetry Will Not Optimize; or, What Is Literature To AI?,” the literary critic Michele Elam explores the contentious topic of AI-generated “literature.” In showing the lack of complexity that such systems deliver, Elam introduces the term “algorithmic ahistoricity.” With the works of James Baldwin and Maya Angelou as her human touchstones, she describes how the technical process of statistical optimization–through which generative AI models predict the most probable next words or “tokens” in response to a prompt–enables the automation of human-like outputs. In doing so, AI models evacuate the particular histories that bring memorable human writing into being. In their place, they output statistical abstractions–similar to how a bell curve normalizes a complex reality by identifying the probable outcomes at the center of the curve.

In contrast to the mathematical dictates of optimization and efficiency, real-world artists contend with the “friction” of lived unpredictability, mayhem, surprise, and much else. As bodily creatures, they deal with “heartbreak,” “gut feelings,” and the exhilarating “eureka” moment of getting it right–sensations that no machine has ever experienced. As James Baldwin puts it (in a meditation on Black English that Elam cites), “A language comes into existence by means of brutal necessity, and the rules of the language are dictated by what the language must convey.” “Algorithmic ahistoricity” papers over this “brutal necessity” quite simply because silicon chips cannot experience necessity and, thus, have nothing to convey.

In writing this essay, I have directly cited Elam and Baldwin just as my tenth-grade classmates paid homage to Bukowski’s premise of defining the magic. What is true for poetry, a rhythmic mode of language, is likewise true for music–a language of sound. Music also speaks to a world of brutal necessity which it strives to convey. In making music, musicians riff on their predecessors, bounce off their contemporaries, and groove to their inspirations; they create new and often improbable mixes of the old and the new. But when a model such as Suno generates a musical output from a user’s prompt (“Write a song that blends the style of ___ and ___”), it is doing something quite different. The point is not only that such text-to-music models can’t simulate the complexity that a living musician brings to composition and performance. It is also that the very premise of automated music imperils the future of a world in which living musicians can thrive.

When Suno and its ilk “scrape” content from the internet without crediting or compensating the artists who created this “training data,” they produce a musical meatgrinder. Through statistical sleight of hand, a text-to-music model grinds up centuries of digitized music and “models” it through computational pattern-finding and instructions to optimize for human-sounding outputs. The result is a kind of virtual jukebox that serves up musical hashes on demand. (“Give me incidental music for my video.” “Produce moody background sounds for my gaming platform.”) We can marvel at what statistics can do with the internet’s provision of so much content, and the power of resource-intensive computer chips and expansive data centers to transform costly data-mining into something that seems plausibly familiar to a human ear. We can even be excited at the prospect of the cool experiments that actual creative people might pull off with this shiny new toy. But in doing so, let’s not forget to define the magic–or to remember the political economy on which the making of vibrant music depends.

Text-to-music models spring from a Silicon Valley culture that seeks to mass-produce human-like creative content for the benefit of executives, founders, investors, and other businesses. As former Federal Trade Commission chair Lina Khan explained in October 2024, current antitrust legislation prohibits “unfair methods of competition.” That is, when a company helps itself to creative work and then uses this data “to compete” with these very artists and “dislodge them from the market,” it engages in “an unfair method of competition.” That is why last April more than 200 members of the Artist Rights Alliance signed a statement warning that generative music technologies were an “assault on human creativity” that could be “catastrophic” for working musicians, artists, and songwriters.

As guitarist Marc Ribot describes in a recent article in Vox, copyright is “literally the only economic right” that musicians have at their disposal. As such, copyright is part of how musicians support themselves, in effect by selling their labor in the form of musical composition and performance. Even if Suno AI outputs magicless music that fans won’t care about, the production of functional music supports our ability to be and become our best selves. It is the soundtrack of our lives that we can tune out. The generic pays for the exceptional to be made. Already, Spotify has undercut us through “ghost artists”: a small group of producers creates hundreds of tracks attributed to fictional artists. Streaming companies are incentivized to flood their services with autogenerated lo-fi so as to avoid paying royalties (in 2024 alone, Spotify paid out over 10 billion dollars in combined publishing and recording royalties–some of which may have gone to you, Reader).

Copyright protection may not seem much like magic. But in its own way, it speaks to a time-tested ethos that, for all its imperfection, reverences credit and (in the ideal case) includes consent and compensation. When Ribot covered Jimi Hendrix’s “The Wind Cries Mary,” he used only the lyrics and the general tone of this classic work. But he made sure to list his composition as a cover and to pay the licensing fee for every listen of it. Behind that ostensibly dry procedural act is something more than simple economic justice–though as Ribot makes clear, that in itself is well “worth fighting for.” In a way that could easily be lost to the move-fast-and-break-things imperatives of technological “disruption,” Ribot’s protocols recognize that it was Hendrix who had first made the magic that he, in turn, could define in a wholly new way.

In a 1984 interview James Baldwin said, “When you’re writing, you’re trying to find out something which you don’t know.” That is, by definition, the experiences and knowledge that good writing conveys are not in the training data. This is a cardinal feature of the creative process that Silicon Valley developers do not get. As the satirist Alexandra Petri put it in a hilarious response to one of the dumbest ads ever aired on television, “the buffoons excited by the prospect of AI taking over all our writing–report summaries, data surveys, children’s letters, all tossed into the same pile indiscriminately–are missing the point in a spectacular manner. Do you know what writing is?”

To this we must add a series of equally important questions:

Do you know what consent is?

Do you know what credit is?

Do you know what compensation is?

A good AI model will not be a meatgrinder for impoverishing artists in a world that, more than ever before, must define the magic. A good AI model will not privatize the living legacy of the world’s music and art for the benefit of a few wealthy investors and their expanding oligarchy.

As the signatories of the Artist Rights Alliance open letter agreed, unchecked AI “will set in motion a race to the bottom that will degrade the value of our work and prevent us from being fairly compensated for it.”

Music, they might have added, will not optimize.

Together we must define the magic and unite to ensure that those who may never have understood what music is for in the first place do not destroy the economic basis for our culture, the human artists who create it, and the publics who need it. It is not too late.

Here’s what you can do:

> Join the 50,000 creative-artist signers of Ed Newton-Rex’s Statement on AI Training: “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of people behind those works, and must not be permitted.”

> Join the Recording Musicians Association NY to help ensure that when the major labels do license our work to AI companies for training, the AFM guarantees that every musician who performed on those recordings is justly compensated.

> Join Local 802’s Artificial Intelligence Advisory Committee (AIA802), to work with the union to ensure a strong AFM legislative response. For information, reach out to Jerome Harr at jeromeharr@aol.com and Harvey Mars at hmars@local802afm.org.
