It started last October, not long after ChatGPT launched. By Grammy Week in February, it was all the industry was talking about — and that chatter got louder later in the month when David Guetta channeled Eminem on a song through artificial intelligence. The volume hit 11 in April, when "Heart on My Sleeve," a song with AI-generated vocals by a fake Drake and a fake Weeknd, racked up millions of streams before being removed by streaming services, and louder still when electronic artist Grimes not only promised a 50-50 split with anyone who wants to use her AI voice on a song but also launched software called Elf.Tech to help them do it.
Artificial intelligence by way of machine learning is the latest existential threat to the music business, and unlike in the frequently cited Napster era, when piracy opened the door to illegal downloads, the industry has mobilized quickly to respond this time, with takedown orders, petitions, op-eds and the Human Artistry Campaign, an initiative established to set fair practices in AI, not just in music but in other arts and even sports; Human Artistry's dozens of members range from the Recording Academy to the Graphic Artists Guild.
The questions around AI and creators' rights are so head-spinning it's hard to know where to begin: If David Guetta uses ChatGPT to create a fake Eminem verse for a song, who gets paid? Should it be Eminem, or could it fall under fair use or even parody, which is protected by the First Amendment? Should it be the engineers of ChatGPT — or, since the machine did not create the verse entirely on its own, the rights holders of the music that was fed into the technology to enable it to create fake-Eminem's rhymes? And that doesn't even begin to get into the publishing issues — to cite just one example: How can the technology that monitors copyright on streaming services determine whether a sound-alike is a parody or simply a reverent influence? (Guetta largely sidestepped the issue by not commercially releasing his AI Eminem song.)
An industry that saw its value cut literally in half by the rise of illegal downloads two decades ago is determined not to let the same thing happen again. Instead, it wants to harness the upside that AI can deliver while protecting the business from costly consequences. “We don’t want to repeat the mistakes of the past,” says Jordan Bromley of the law firm Manatt, Phelps & Phillips, “where we see new technology and believe the sky is falling.” He adds, “We think there is a proactive way to do this to help not only embrace the tech but protect creators from the worst-case scenario.”
“I think we are cautiously optimistic,” says Danielle Aguirre, exec VP and general counsel at the National Music Publishers Assn. “A lot of writers are embracing the technology and are using it as part of the creative process. We’re looking for a path forward where a lot of these AI platforms can respect the value of the musical works they’re using to ‘train’ their platforms, and we can find a way to work with them to license the uses.”
Asks musician, voice actor and copyright activist Dan Navarro: “Am I afraid of it? No. But am I concerned and aware? You’d better believe it.
"To say AI is the devil would be naïve, and I already get regular grief from associates who call me a crybaby for wanting royalties from streaming to be higher," he continues. "It's fun to characterize creative people wanting to protect their livelihoods as Luddites or crybabies. But our property rights exist, our rights of publicity exist, and copyright is still significant."
RIAA CEO Mitch Glazier says the coalition's statement of principles "came together very quickly," with the drafting process starting in January. "It reminded me a little bit of when the entire community came together very quickly on COVID relief to make sure that the whole community was educated and had a resource that they could go to together and push out and let people know what they could do.
"This was obviously a different situation, but I think probably the launch of ChatGPT, like it did for the rest of the world, woke up the music community to just how fast and powerful this technology is and how it can be used for awesome things, but it can also be used for not awesome things. We needed to be very clear, very quickly about how the music community views it and to plant a flag and develop principles."
Music professionals and trade group executives who monitor AI’s progress think the industry is far better prepared to deal with the technology’s potential challenges than it was to combat the wave of peer-to-peer file sharing that followed Napster’s 1999 launch.
"Obviously ChatGPT made a lot of people realize how close the next stage of AI is," says Tatiana Cirisano, an analyst for U.K.-based Midia Research. "But it's not as if we haven't been living with AI in our daily lives for years, and even in music-making. It's been a steady progression."
Jacqueline Sabec, partner at King, Holmes, Paterno & Soriano, adds, "My general belief is that artists are going to do what they've always done and ultimately embrace the technology and create things that we've never seen before or thought of to entertain us and drive human development.
“The biggest threat is the economic threat,” she concludes, “but we’ll probably figure out the economic solutions, as we’ve done before with photocopy machines, recorded music, Napster and YouTube.”
In fact, many feel that AI can actually be used to police copyright infringement, whether committed by humans or machines. Engineer and attorney Matthew Stepka, who was previously VP of business operations and strategy for special projects at Google and now lectures at the business and law schools of the University of California, Berkeley, and invests in AI ventures, notes that AI has the potential to be an effective plagiarism detective.
"With YouTube, they did fingerprinting on music so if it's played in the background, the artist can get paid, but it has to be an exact copy of a commercially published version," Stepka says. "AI can actually get over that hurdle: It can actually see things, even if it's an interpolation or someone just performing the music." Stepka and Sabec note that just as performing-rights groups collect on radio play and other uses, AI could be deployed to detect and collect on instances where a recording breaches copyright.
"If AI listens to music and any derivative content with an algorithm to identify where the music originated, and creates a mechanism to collect revenue generated by that content with the ability to then pay the content creators, that could be a huge benefit to artists," Sabec says, then takes it one step further. "If the AI can identify infringements from humans and from technologies, then do we really need a jury to decide copyright cases?" she asks. "You don't want the law to be arbitrary — you want it to be precise. And unfortunately, in this space of copyright law, jurors and judges aren't necessarily good at evaluating music cases. In some ways, this is a natural problem for AI to solve."
However, one area where most parties do see a threat is in music library services that provide royalty-free “background” music to content producers. “Royalty-free music libraries came about in the first place for content creators who couldn’t afford to license popular music, or when the labels and publishers won’t take the time to license something to a smaller creator,” Cirisano says. “There are many artists who earn income by creating production music, but now AI is taking that over.”
While we wait to see how things unfold, here’s a brief overview of issues that labels, publishers and creators’ advocates want to see resolved.
TRAINING AI WITH COPYRIGHTED MUSIC
“The companies that scrape licensed works [to train platforms] should have a license,” says Bromley. “I think they’ll likely argue fair use, and the courts will decide whether that’s right or wrong.”
A blanket license, which is issued by rights holders and provides for the use of their catalog for a predetermined period, is the solution most often suggested by creators’ representatives, although an opt-out would need to be incorporated for those artists or writers who don’t wish to participate. However, “opting in also has its own set of problems, as we’ve seen with YouTube,” says Sabec. “Creators either opt in to the content management system or spend tons of money on legal fees shutting down infringements, only to have a new one appear the next day. Maybe a clever engineer and AI could help solve this problem.”
COMPENSATION FOR PAST TRAINING
Any new licenses would almost certainly require compensation to creators and copyright holders for training that has already used copyrighted material, a precedent most recently emphasized when music publishers settled with Peloton. "A lot of the AI platforms understand that there's going to have to be some licensing for the use of these songs to train their platforms," says Aguirre.
Adds one high-ranking label executive, "At this point, we have to consider pumping the brakes on the output of AI that's already been created and might have been trained on intellectual property. Once we have a standstill on the bad practices, I think we can look at how to deal with the consequences of what's happened in the past."
Navarro says he won’t be surprised if the issue comes up sooner rather than later. “Someone’s going to test it. My personal opinion is that we’re looking at the Supreme Court within two years.”
ADVANTAGE, HUMANS
Why should companies have to pay to train their AI platforms with copyrighted works when human songwriters are also influenced by the compositions they've heard? "I can imagine AI platforms making the case that all new work is informed and inspired by copyrighted works, and creators don't pay every time we're inspired," says Michelle Lewis, executive director of Songwriters of North America. "But copyright law draws and holds the lines between inspiration, originality and infringement."
Mike Fiorentino of indie publisher Spirit Music Group argues that human composers do compensate the musicians who inspire them. "Let's say I wanted to write a song à la Led Zeppelin," he says. "My dad bought the LPs and cassettes, I bought the CDs, and I also listen to radio, where ad dollars are being generated. But if you feed a bot nothing but Led Zeppelin, that bot isn't influenced by Led Zeppelin — you fed it data. Did that data get paid for, and what about those copyrights?"
Aguirre adds, “Of course you’re going to need copyrighted music to train AI, but you have to pay for it in the same way that people who buy records or CDs or subscribe to Spotify are influenced by and are trained on that music.”
HUMAN RIGHTS, AND COPYRIGHTS
To date, the U.S. Copyright Office maintains that only works created by humans can be copyrighted, a position it upheld in February when it revoked a copyright granted to Kristina Kashtanova for the graphic novel "Zarya of the Dawn," which was partially created using generative AI. The office had initially approved a copyright for the work, unaware of how it had been produced. However, San Antonio-based attorney Van Lindberg appealed the withdrawal of the copyright. In a partial victory for Kashtanova, the Copyright Office issued a copyright for the work's text and the arrangement of written words and art, but would not issue one for the AI-generated images.
The Copyright Office’s stance has been echoed in the courts. The U.S. Supreme Court recently refused to hear the case of computer scientist Stephen Thaler, who had challenged a series of refusals by lower courts to allow his AI system DABUS to be designated as a patent inventor. Thaler had argued DABUS should be recognized as an “individual.”
“Fundamentally, I agree with what the Copyright Office is trying to say, which is that copyright is meant to be about human creativity and human creation,” says Aguirre. “That said, today artists and songwriters use AI tools in their songwriting and musical process. We want to see how this develops and whether or not copyright law needs to be changed to reflect the way that AI both uses music and also generates musical content.”
There has been enough heat on the issue to prompt the Copyright Office to issue a statement on March 16 declaring that “public guidance is needed on the registration of works containing AI-generated content” and that it has “launched an agency-wide initiative to delve into a wide range of these issues.”
CROWDING AN ALREADY CROWDED MARKET
The major labels’ market share is already being impinged on by a gusher of independently released music, with as many as 100,000 songs ingested by streaming services every day. Likewise, the year’s biggest hits account for a smaller share than was the case just a few years ago.
Market share is a prime determinant of how the major streaming services pay labels, and labels' share is already being cut into by "functional" music (i.e., low- or no-royalty "mood" music). AI could similarly offer streaming services options to reduce their obligations to labels.
However, “the majors create mood music too,” says Cirisano. “Who’s to say they won’t get into AI-generated music themselves?”
NUTRITION FACTS
The sixth principle expressed in the Human Artistry Campaign's manifesto is "Trustworthiness and transparency are essential to the success of AI and protection of creators," a sentiment that Michael Nash, Universal Music Group executive VP and chief digital officer, agrees with. "We ultimately want transparency and visibility," he says. "The same way that food is labeled for artificial content, it will be important to reach a point where it will be very clear to the consumer what ingredients are in the culture they're consuming."
While no one thinks the industry is approaching a scenario where a fake Taylor Swift can vanquish the real one just yet, there are few illusions about the potential — and the threat. “AI’s going to get smarter, better, quicker, deeper, richer,” Navarro says. “Having learned from all those previous fights with new technology, this means I have to get better so I can stay one step ahead of the machine.”