
Dead Beatles, Fake Drake and robot songwriters: Inside the panic over AI music

The Weeknd, from left, Drake, John Lennon and Grimes.
(Ross May / Los Angeles Times; photos by, from left, Rich Fury / Getty Images for dcp; Evan Agostini / Invision / AP; Reg Lancaster / Daily Express / Hulton Archive / Getty Images; Jamie McCarthy / MG21 / Getty Images for The Met Museum / Vogue)

Inside the Mayk.It app, I don’t have to work hard to sound nearly perfect.

Mayk.It is an AI-powered music startup, backed by an initial $4 million from investors including the venture capital firm Greycroft, former Spotify executive Sophia Bendz and celebrities such as YouTuber MrBeast and voice-tweaking enthusiast T-Pain. The Santa Monica-based company hopes to do for singing and production what Instagram did for photography and TikTok did for video editing: make it uncannily easy to express yourself at a semiprofessional level on social media.

Artificial intelligence is the talk of governments and industry today. In the arts, screenwriters, illustrators and musicians are nervously eyeing the tech’s potential and the possibility of being outmatched (and laid off) in favor of such software.

On the roof deck of Mayk.It’s office, co-founder Stefán Heinrich Henriquez played a video from a forthcoming version of its app, Covers.ai. The clip showed him capably singing Taylor Swift’s “Shake It Off,” though he’d barely sung a note himself. He’d created an AI model of his voice, and the app rendered a video of him performing the song. It could have made him sing anything, or, with the app’s built-in voice models, sing as anyone else.
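
For the technically curious: most “sing as anyone else” features follow a common recipe, in which a model strips the performer’s identity out of the source vocal and re-synthesizes it conditioned on a target voice. The sketch below is a toy illustration of that structure in PyTorch, with invented module names and dimensions; it is not Covers.ai’s or Mayk.It’s actual system.

```python
# A toy sketch of the encoder/decoder recipe behind many "sing as anyone else"
# voice-conversion systems: encode the source vocal's content, then decode it
# conditioned on a target-speaker embedding. Every name and dimension here is
# invented for illustration; this is not Covers.ai's or Mayk.It's actual model.
import torch
import torch.nn as nn

class ToyVoiceConverter(nn.Module):
    def __init__(self, n_mels: int = 80, content_dim: int = 256, speaker_dim: int = 64):
        super().__init__()
        # Content encoder: ideally keeps "what was sung" and discards "who sang it."
        self.content_encoder = nn.GRU(n_mels, content_dim, batch_first=True)
        # Decoder: rebuilds mel-spectrogram frames from content plus target identity.
        self.decoder = nn.GRU(content_dim + speaker_dim, content_dim, batch_first=True)
        self.to_mels = nn.Linear(content_dim, n_mels)

    def forward(self, source_mels: torch.Tensor, target_speaker: torch.Tensor) -> torch.Tensor:
        # source_mels: (batch, frames, n_mels); target_speaker: (batch, speaker_dim)
        content, _ = self.content_encoder(source_mels)
        speaker = target_speaker.unsqueeze(1).expand(-1, content.size(1), -1)
        decoded, _ = self.decoder(torch.cat([content, speaker], dim=-1))
        return self.to_mels(decoded)  # converted frames, to be vocoded back to audio

# Smoke test on random tensors: a few seconds of frames and a stand-in target-voice embedding.
model = ToyVoiceConverter()
converted = model(torch.randn(1, 300, 80), torch.randn(1, 64))
print(converted.shape)  # torch.Size([1, 300, 80])
```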

“During my time at TikTok, I saw a lot about how music is changing,” Henriquez said. He and co-founder Akiva Bamberger, both in their 30s, previously worked at TikTok, YouTube and Snapchat. “It’s not just about musicality anymore. It’s quantity over quality. If you try to perfect just one song, you have less chances to hit an algorithm on any of these platforms.

“This is a new approach,” he continued, still buzzing after attending the SoCal EDM festival Lightning in a Bottle. “I’m not even thinking about instruments anymore. Do I need to combine a beat with a melody and lyrics and an actual voice? Maybe I don’t. I think you still want to put in some piece of yourself, but with less and less work. We have to simplify even further and make more decisions for a beginner so that they don’t get lost in decision paralysis. That’s where AI comes in.”

Later that week, I cracked open Mayk.It to find out how easy it can be to sound great. Since its debut, users have created millions of tracks, according to the company’s co-founders. The app offers an array of backing beats, from Afro-pop to country to ambient, made by human producers. I chose “Seductive Nights,” a moody R&B loop, and turned to Mayk.It’s ChatGPT-powered lyrics generator for inspiration (Mayk.It also uses its own proprietary voice-synthesis technology).

Worried about AI decimating your livelihood? Mayk.It can put that into couplets. “Will I ever find my way?” it suggested as vaguely ominous lyrics, after my prompt about feeling insecure about the coming AI wave. “Can I trust the dreams I make? / Is my future safe?”
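
Mayk.It hasn’t published its prompts, but a ChatGPT-powered lyric generator can be as simple as wrapping a user’s mood in a request to OpenAI’s public chat-completions API. Below is a minimal sketch, assuming that general approach; the model name, system prompt and wording are illustrative, not the company’s.

```python
# A rough sketch of a ChatGPT-backed lyric generator using OpenAI's public
# chat-completions API. The prompt and model choice are assumptions for
# illustration, not Mayk.It's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You write short, singable pop couplets."},
        {
            "role": "user",
            "content": "I'm feeling insecure about the coming AI wave. "
                       "Write four moody R&B lines about it.",
        },
    ],
)
print(response.choices[0].message.content)
```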

I hit record and sang a melody, but I barely needed to. Mayk.It can correct your pitch, add effects, edit audio and mix it into a tight draft with as few clicks as it takes to add an Instagram filter. Mayk.It can then drop your finished track into its own social platform, or TikTok and Spotify.
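
Under the hood, “correct your pitch” boils down to estimating where the voice sits and nudging it toward the nearest note. Here’s a deliberately crude version of that idea using the open-source librosa library; commercial tools work note by note with far more finesse, and the file names here are hypothetical.

```python
# A deliberately crude take on "correct your pitch": estimate the sung pitch with
# librosa, find the nearest semitone, and shift the whole recording toward it.
# Real products correct note by note; the input file name here is hypothetical.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("my_take.wav", sr=None, mono=True)

# Track the fundamental frequency over time (pYIN), then take the median of voiced frames.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
median_f0 = np.nanmedian(f0[voiced_flag])

# Distance, in semitones, from the nearest equal-tempered note.
midi = librosa.hz_to_midi(median_f0)
correction = float(np.round(midi) - midi)

# Shift the entire take by that offset and save the result.
tuned = librosa.effects.pitch_shift(y, sr=sr, n_steps=correction)
sf.write("my_take_tuned.wav", tuned, sr)
```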

“We can help more people participate in music,” Bamberger said. “We’ve done the work of going into the studio for you.”

Akiva Bamberger, left, and Stefán Heinrich Henriquez, co-founders of Mayk.It.
(Jay L. Clendenin / Los Angeles Times)

Mayk.It is one approachable use case for AI’s potential in music. But it’s telling that a technology once deemed science fiction can now so casually do the job of singer, producer, audio engineer and label executive, right from your phone.

From the AI-generated “Fake Drake” and the Weeknd collaboration that caused a ruckus this spring, to a new Beatles track constructed from dusty demo tapes and set for a September release, to Universal Music Group’s big investment in generative ambient music (i.e., music created automatically, without human musicianship), to shellshocked film composers and frantic label executives, it’s obvious AI could rattle the music industry just as MP3s and Napster did decades ago.

But while the technology can be gobsmacking, its practical uses and legitimate dangers aren’t yet clear. The ethics and applications of this fast-advancing field are still up for grabs. Will AI be an extinction-level event for the industry, or a tool like sampling or drum machines, which, despite much initial hand-wringing, did not replace musicians?

The rhetoric around AI can lean apocalyptic. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read one open letter from the Center for AI Safety signed by dozens of tech industry figures like Sam Altman and Bill Gates, as well as Rep. Ted Lieu (D-Torrance).

With the 2022 debuts of services like ChatGPT and Stable Diffusion, which conjure close-enough text conversation and compelling images out of bare prompts, massive impacts on industries are undoubtedly en route. The Biden White House, in its “Blueprint for an AI Bill of Rights,” warned: “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public.”

“The history of the growth of technology companies two decades ago serves as a cautionary tale for how we should think about the expansion of generative AI,” said Lina Khan, the Federal Trade Commission chair, in a May op-ed. “We once again find ourselves at a key decision point.”

Music may seem marginal compared with sectors like biotech or defense, where AI is in full swing. But file-sharing services like Napster helped siphon tens of billions of dollars from the entertainment economy, and Spotify trained consumers to expect media to live in an ephemeral cloud rather than on objects they own. Those impacts resonated well beyond the pop charts.

From the advent of vinyl records to samplers and 808s, from studio software like ProTools to media platforms like iTunes and Spotify, technology has always shaped and reshaped how music is created and consumed. AI is already present in many of the ways we interface with music, from recommendation algorithms to production tools like voice modulators.
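
Those recommendation algorithms, at their simplest, compare a listener’s taste profile against feature vectors for each track and surface the closest matches. A bare-bones illustration with made-up numbers:

```python
# Recommendation, at its simplest: score each track against a listener's taste
# vector and surface the closest match. All numbers here are made up.
import numpy as np

songs = {
    "moody_rnb_loop": np.array([0.9, 0.1, 0.3]),  # invented features, e.g. energy/calm/vocals
    "ambient_sleep": np.array([0.1, 0.9, 0.2]),
    "country_duet": np.array([0.6, 0.2, 0.9]),
}
listener = np.array([0.8, 0.2, 0.4])  # a hypothetical listening profile

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(songs, key=lambda name: cosine(songs[name], listener), reverse=True)
print(ranked)  # most similar track first
```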

Martin Clancy, a researcher at the Centre for Digital Humanities at Trinity College Dublin and editor of “Artificial Intelligence and Music Ecosystem,” said that what separates this wave of AI from prior transformative tech is the sheer speed of its adoption.

“There used to be a time lag between the invention and social adoption of tech. How long was it from the turntable being invented to DJ Kool Herc? A hundred years,” he said. “It took 30 years for samplers to become affordable consumer items. AI is different because of the scale. The cost is cheap, so money is being poured in, and all the tech is stacking on top of itself. We’re not only getting unintended consequences but unintended designs.”

Labels are racing to respond to advancements like Meta’s new AI-powered text-to-music generator, trained on 20,000 hours of licensed music. In June, Sony Music created an executive VP of AI position. At a recent Universal Music Group retreat for top executives, Michael Nash, executive vice president and chief digital officer for the label group, said that AI was a prominent topic: “Every single label CEO said, ‘I need to talk to my team about AI.’”
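
Meta’s generator was released publicly as MusicGen, part of its open-source audiocraft library, and the documented usage is only a few lines. The sketch below follows that public release; it says nothing about any label’s internal tooling, and the prompt and output name are invented.

```python
# Generating music from a text prompt with MusicGen, Meta's publicly released
# model in the open-source audiocraft library (usage follows the library's
# documentation). The prompt and output name are invented for illustration.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # smallest public checkpoint
model.set_generation_params(duration=8)  # seconds of audio to generate

wav = model.generate(["lo-fi ambient pad with soft piano, 70 bpm"])  # (batch, channels, samples)

# Save the first (and only) clip with loudness normalization.
audio_write("generated_ambient", wav[0].cpu(), model.sample_rate, strategy="loudness")
```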

Many credit “Heart on My Sleeve,” a serviceable simulacrum of Drake and the Weeknd produced by TikToker Ghostwriter977 using AI technology, as a wake-up call to the tech’s norm-shattering potential. (UMG’s stock lost nearly a fifth of its value in the weeks after the track took off on social media; at the company’s behest, the track was eventually removed from streaming sites.) A version of Hole’s grunge classic “Celebrity Skin” with AI-swapped vocals from frontwoman Courtney Love’s late husband, Kurt Cobain, sent Gen X reeling. A British indie band, Breezer, sidelined its own lead singer in favor of AI-modulated vocals from Oasis singer Liam Gallagher, who approved of the gambit. “Heard a tune it’s better than all the other snizzle out there,” Gallagher said on Twitter about “AISIS.” “Mad as f— I sound mega.”

The fear that artists could be replaced by their own digital avatars, or that labels could conjure infinite music based on digital modeling, became a live issue.

But several executives said that such obvious copyright infringements were stunts, not serious threats.

“It’s a shame that the early headlines came from voice and likeness imitation,” said Mike Caren, the founder of APG, the independent publishing and record company that first signed YoungBoy Never Broke Again, Don Toliver and Charlie Puth. “It shouldn’t be a given that AI can learn from all recorded music, that’s a poisoned well. But Fake Drake was human songwriters with a voice filter on it, that’s nothing innovative. I hope it doesn’t sour people or limit their thinking on AI.”

“The tech is in service of the artist and their intent, rather than artist replacement in service of tech,” Nash said. “We obviously take a strong position on protecting rights.”

Even a more radical artists-rights activist like Kevin Erickson, director of the Future of Music Coalition, is skeptical that AI could outright replace artists, or will inevitably learn on the backs of protected work.

“I get a sense that there are built-in limits to how much public interest there will be in using AI to have a digital approximation of a singer that didn’t actually perform,” he said.

Referencing a recent Supreme Court decision about fair use, he said that “coming out of the Warhol case, the law doesn’t favor a reading of allowing permission-less ingestion of wide swaths of recorded music for training. We already have policy tools available to make meaningful interventions.”

The deeper concerns may lie in the subtle shifts of power and incentives that come from knowing how capable AI is at making music.

The music industry has been upended by so many forces — streaming replacing downloads, COVID decimating touring, inflation hiking costs — that many artists and composers fear a body blow from AI could be fatal to their careers.

Shruti Kumar, 35, is a film composer and producer who has worked with Hans Zimmer and Henry Jackman, collaborated with Alicia Keys, Nas and Fiona Apple, and conducted at Walt Disney Concert Hall. Her father worked at DARPA (the U.S. military’s tech research division) and her mother was an economist; Kumar is not naive about this tech’s potential.

When she talks to peers about AI, “The mood is negative because we’re already scared as it is,” she said. “We don’t have a union, and we’re deep in our own struggles with streaming. The economics of music are already not sustainable.”

She’s worried that the executives who will create policy around AI have fundamentally different incentives than artists. “Will creative work be valued in a business where executives are finance and tech people?” Kumar asked. “Screenwriters are on strike right now to prevent themselves from becoming [treated] like musicians.”

Composer Shruti Kumar.
(Ellie Pritts)

Erickson said that the creep of AI is already being felt in more behind-the-scenes capacities, like library music used in TV shows, podcast background music or syncs for advertisements. Those unglamorous corners of the music business provide necessary income for many working artists and composers, and he fears a general devaluing of that work when AI can do it capably for next to nothing.

“Large services like Spotify have been in a battle to lower royalty costs, and now they potentially have the ability to swap out certain parts of their catalog for AI approximations, like study or sleep playlists or generic workout music,” he said. “Spotify’s ‘discovery mode’ is taking advantage of its platform power to steer listeners towards something it pays less for. That’s downward pressure on the music licensing landscape, even if you’re not making the kind of work that AI can generate.”

(In June, the Recording Academy announced new eligibility rules for the Grammys, so that only “human creators” can win awards. Compositions may incorporate elements of AI in the vocals or instrumentation, but songwriting entries must be primarily written by people.)

While Fake Drake will probably not cause the real one to have to sell off his estates, some AI-driven ventures like UMG’s partnership with Endel, an “AI sound wellness company,” may give musicians pause. Though ambient background music for streaming playlists rarely draws attention, it’s a huge and lucrative corner of Spotify and other platforms.

Services like Boomy, a Bay Area music tech company, are already flooding Spotify with AI-generated music, running into varying legal and contractual obstacles along the way. Even if AI doesn’t outright replace artists, it still puts a thumb on the scale for cheaper alternatives to paying live musicians.

“Even before AI came on the scene, the economics of the music industry weren’t working for the vast majority of artists,” said Gavin Mueller, a professor of new media at the University of Amsterdam and the author of “Media Piracy in the Cultural Economy.”

Alongside all the peril, some artists and executives wonder how AI could also be used to their advantage, or at least relieve some of the drudgery of making music.

Maaike Kito Lebbing, an L.A.-based Australian electronic music producer who performs as Kito, said AI has changed her career for the better. When Grimes, the avant-pop artist, released Elf Tech, an AI-powered voice modulator that could drop her distinct tone into any track, Kito made fast use of it to write and produce “Cold Touch,” a fully licensed single using Grimes’ voice.

The song deployed Kito’s formidable production and melody-writing skill (she’s cut an official Beyoncé remix), and given Grimes’ already alien voice, the lovelorn, digitally disembodied track worked on its own terms. The song earned global attention and a rave from Grimes, who deemed it an official collaboration, promoted it and split revenues 50-50.

“I directly collaborated with an artist I like, so the experiment was liberating,” Kito said. “AI is like any piece of tech — MIDI and fake strings, Auto-tune, things to fix pitch — that is now widely adopted.”

Kito deejays at an Art Basel party in Miami, 2021.
(Thaddaeus McAdams / Getty Images for Ocean Drive)

Caren sees these credible early adopters as a healthy sign for the tech’s future.

“Songwriters should embrace and support ethically and legally driven tech, to protect against rogue and bad actors from leading the space,” he said. “Hardworking, fearless risk-takers will wind up with rewards.”

AI’s magic is evident. After director Peter Jackson and dialogue editor Emile de la Rey used voice-separation AI to clarify the dialogue in the Beatles documentary series “Get Back,” and producer Giles Martin used AI to separate tracks on a “Revolver” reissue, Paul McCartney resurrected a demo of John Lennon singing and playing piano. The original composition was recorded in 1978, shortly before Lennon’s death, onto a muddy cassette labeled “For Paul,” along with several other demos that were later released. Until recently, the recording was too rough to edit professionally. But with AI, the world will soon have a “final” new Beatles song.
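
Jackson’s team built its own machine-learning tools for that kind of separation. The closest an ordinary listener can get is an open-source stem splitter such as Demucs, which pulls vocals away from accompaniment in a mixed recording. A rough sketch, assuming Demucs is installed via pip and that “old_demo.mp3” is a file of your own:

```python
# Split a mixed recording into a vocals stem and an accompaniment stem with the
# open-source Demucs separator. This is an analogy for the voice-separation AI
# described above, not the tooling Jackson's team actually used.
import subprocess

# --two-stems=vocals asks Demucs for just "vocals" and "no_vocals" stems.
subprocess.run(["demucs", "--two-stems=vocals", "old_demo.mp3"], check=True)
# Demucs writes the resulting WAV files under a "separated/" folder.
```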

“We had John’s voice and a piano and [Jackson] could separate them with AI. They tell the machine, ‘That’s the voice. This is a guitar. Lose the guitar,’” McCartney told the BBC this year. “It’s kind of scary but exciting, because it’s the future. We’ll just have to see where that leads.”

McCartney later clarified that the track did not use anything as garish as an AI-generated John Lennon voice. “We cleaned up some existing recordings — a process which has gone on for years,” McCartney wrote on Instagram. “To be clear, nothing has been artificially or synthetically created. It’s all real and we all play on it.”

Beatles drummer Ringo Starr told Rolling Stone that George Harrison had also recorded parts for the song before his death in 2001. “This was beautiful,” Starr said. “It’s the final track you’ll ever hear with the four lads.”

Capitol Records, part of the Universal Music Group, will release the track, rumored to be titled “Now and Then.”

“If you don’t get chills” from a potentially final Beatles track, Nash said, “your heart might not be beating.”

Few listeners will probably get such chills from anything passing through the slipstream of a platform like Mayk.It. That’s part of the point, said Henriquez.

“What is a song for? Is it for the charts, or can it just be made for social media or video games?” he asked. “I think AI will give music a new medium. Like, a whole new purpose for it to exist.”
