“It’s a really bad word in some circles,” says Lumineers producer David Baron. “I know people who hate AI with a passion, but wouldn’t say it out loud.”
“People don’t really admit to what extent they’re using it,” says songwriter Michelle Lewis. She describes the atmosphere around AI with her peers as “don’t ask, don’t tell.”
The CEO of Suno, Mikey Shulman, recently described AI use as “the Ozempic of the music industry — everybody is on it and nobody wants to talk about it.”
It’s a fraught moment for AI in the music industry. Baron admits that there can be a “social penalty” around using it. Teddy Swims experienced a version of that in November when he called AI music tools “truly amazing” — and faced a fan backlash online. At the same time, you don’t see the same level of public uproar from artists and industry people that we saw just a couple of years ago in the wake of the first shockingly convincing AI artist clones, like the Drake-Weeknd simulacrum “Heart on My Sleeve.” “No one wants to be left behind, or come across as old-school,” says Lewis, who has written songs for Cher, Hilary Duff, and others, and co-founded the nonprofit advocacy organization Songwriters of North America.
Mostly behind closed doors, AI-powered tools are causing profound shifts in how music is being made at all levels of the industry. They’ve found a place in the studios and digital audio workstations of many of the biggest producers and songwriters in the world. Lauren Christy of the Matrix, who has written for everybody from Avril Lavigne and Britney Spears to Liz Phair, says, “The train has left the station.”
How much AI-assisted music is already on the Billboard charts? Baron says he hasn’t personally seen artists or producers submit fully or partly AI-generated music to labels, but adds, “I guarantee that’s happened.” “We don’t have the detection software, really, that’s effective yet,” says Lewis. “So, if you can’t tell, then how can you enforce it?” The industry is essentially relying on the honor system. Discussing a survey of music producers by the audio-tech company Sonarworks, the company’s CEO reported “many anecdotes about artists submitting AI-generated songs as their own, and labels not being able to detect them.”
For the most part, professionals aren’t asking Suno for full songs and uploading them onto Spotify. But Jay-Z’s longtime producer, DJ, and engineer, Young Guru, says it’s become common for hip-hop producers to make funk and soul samples out of AI, rather than license original music or hire musicians. Guru guesses that “more than half” of sample-based hip-hop is being made this way now. He still pays for samples or hires musicians to interpolate them, but producers who don’t have the budgets or inclination now have a shortcut. “They’re getting really good at prompting now,” he says. “Where before it was just ‘Give me soulful 1960s whatever,’ now it’s ‘Give me 1960s music as if it was recorded in Motown and this person wrote it,’ or ‘Give me 1970s music as if it was recorded at Stax if this person wrote it and this person played bass.’”
In a recent survey of more than 1,100 producers, engineers, and songwriters by Sonarworks, seven out of 10 respondents said they were at least occasionally experimenting with AI tools, and one out of five were regular users. Most of them are using specific tools for narrow, time-saving tasks, such as restoring audio, isolating instruments and vocals within songs (a.k.a. stem separation), and mastering records. Baron says stem separation is “phenomenal”: “Last night, I isolated a vocal, and it sounded like it was recorded in a pristine studio by itself — something that was not possible, like, even two to three years ago,” he says. “That’s a humongous change.” Matching the sonic feel of another record, something that might have taken hours or days before, can now be done in minutes. “I can take an album that I love the way it’s mixed,” says Guru, “say, Dr. Dre’s 2001, pick a song that I want my mix to be tonally like, and apply that tone to my mix.”
Using AI to fix vocals, or even layer in AI-generated vocals, is something artists and producers are split on. “I’ve heard of people using AI vocals on big records to flesh out the backgrounds,” says Christy. She doesn’t use AI vocals on her own finished songs, but has been impressed by the quality of the voices. “The AI singing robots have swag and land right behind the backbeat,” she says. “One of the most amazing singers I work with told me, ‘I hate this robot. She’s singing it better than I am.’” Swims talked about fixing stray words on a song with AI while he was in Australia, which saved him “going [to] the studio and doing the line 15 times and spending time and money and effort.”
Charlie Puth used a tool called Replay to quickly try out ideas for his upcoming album, Whatever’s Clever. “There’s a setting [that makes] a mono vocal sound like eight to 10 people singing,” he told Rolling Stone recently. “I’m gonna use that to see if I even want the choir sound on there. And then we take it off and replace it with a real choir.
“That’s the correct way of using AI,” he said. But he draws the line at using generative AI on a finished track: “Making a song, and then typing in ‘Make this sound different,’ and it’s spitting out a completely different production with stems is nauseating.”
Producer Nathan Chapman, who has worked with Taylor Swift, Keith Urban, and many others, hasn’t used AI for any vocals or instrumentation, but has gotten requests to change lyrics with it. “I said no, because I hadn’t learned how to do it yet. But it’s the artist’s call,” says Chapman. Still: “I would just rather have them sing it again.”
Songwriters in Nashville and Los Angeles will use tools like Suno to turn lyrics and chords into fully arranged demos that they can shop to artists and labels. “In private, songwriters are saying, ‘It’s kind of awesome,’” says Lewis. “You don’t have to split your copyright; you can write by yourself; and you don’t have to pay a producer. For a lot of songwriters it’s been very empowering.”
“It can make it a more fluid process,” says Lewis. Christy says she was recently texted by a “big star” she works with, asking if she had any songs for them. She was able to reply immediately with a demo. “All the melodies, lyrics, and chords [were] mine,” she says. The artist said on the spot that she wanted to record it. “I was like, ‘Whoa — that just saved me days.’”
OF COURSE, FOR every task that AI streamlines, there might be someone on the other end who isn’t paid anymore: a demo musician or producer, an assistant engineer who helps with mixes, a studio owner renting time, maybe a Seventies songwriter living off of licensing fees. Nashville native Chapman says he’s hopeful that the doors AI opens to amateurs will lead to a boom for musicians and studio owners down the road. But for now, “there are less sessions happening,” he says. “It’s hurting the demo community.”
“I work in the children’s-animation space,” says Lewis. “That’s kind of low-hanging fruit for being replaced by AI.” She says she is still getting jobs, but overall, “no one’s working.” Stock music, or production music — music licensed by companies for TV, radio, and other media — is “toast,” says Lewis. Some large companies, like Disney, won’t use AI music because of potential copyright issues, but smaller production companies that need to save money are “finding the line of what they can get away with.”
Baron worries about the pipeline of future producers. Mundane work that used to be the province of assistant engineers — the work that AI is replacing — is “where you’re training the next generation,” he says. “Our generation is going to go away eventually, right? And we need those 25-year-olds trained.” Aspiring producers have to be in the studio, see sessions, watch musicians interact. “Maybe in the best-case scenario, assistants will just be able to do more stuff,” he says.
Then there are AI’s unintended creative consequences. Recently, Lewis’ writing partner made a demo with AI, but the vocal wasn’t quite right. So Lewis decided to sing the part herself — and realized she couldn’t, because there were no pauses in the vocal for taking a breath. “You can end up writing a song that is technically unsingable by a human.”
Chapman worries about “demo-itis” — the tendency of artists to get attached to a demo and want to replicate it exactly. AI demos give that a strange new wrinkle, because they can sound both perfect and off. With human demos, “usually what you’re contending with is the demo is crappy, but it’s super cool,” says Chapman. “I haven’t heard a Suno demo yet that sounded, like, bad good. They’re all just … good.”
There are little AI sonic “tells,” the equivalent of ChatGPT’s tendency to use em dashes and certain phrases, that Baron notices. “It inserts strange measures,” he says, “like a weird 2/4 bar that no one would write.” Baron tells the story of a drummer friend, a virtuoso who plays with artists with billions of streams, being asked to replicate an AI drum part. “He found it humiliating.”
Christy mentors aspiring songwriters, and like Chapman, loves that AI can potentially make their path easier. She imagines “a young songwriter who can’t really sing who is potentially the next Diane Warren. And she has no money to make demos. AI is the perfect tool for her, because she can still retain ownership of her song and give an example of her songs that are produced at a pretty high level.” A darker scenario is that more pathways into the music industry, and more music, mean the pie gets sliced smaller and smaller: “We already have a flood of so much content,” says Baron. “It’s so hard to get noticed when there’s 60,000 to 100,000 songs being uploaded a day. So, what happens when AI music makes it so it’s 300,000 tracks a day? And I think that’s going to happen.”
“No one’s selling, like, 10 bricks of coke anymore,” says Young Guru, talking about music’s diminishing returns. “It’s just, like, a bunch of dime bags. It’s a numbers game.”
An enormous problem is that the copyright questions around AI remain mostly unresolved. There are ongoing disputes over whose music the services Suno and Udio were trained on; how to figure out what music is being recombined in their output; and how to pay artists and songwriters for it. Even AI enthusiasts say they won’t use AI in their released music until that’s resolved. “I’m scared to use anything,” says Christy. “I would hate to have an AI detector detecting something.”
Meanwhile, the AI music services are making their pitch. “They’re hosting camps,” Lewis says. “Udio’s hosting camps, Suno’s hosting camps. Introducing this software to the pro class.” Big writers are getting invited to nice studios to learn about stem separation and customization. “They’re not dumb. They’re trying to get us to adopt.”
DESPITE SOME CONCERNS, David Baron is not among the people who think the sky is falling. He compares AI to pottery. “There are potters that make beautiful pieces that you buy for a lot of money, and they’re gorgeous and everyone’s different,” he says. “But you can also buy pottery at Target that’s all exactly the same and it’s fine. There’s nothing wrong with either one of them, and they both exist in the world together.”
Michelle Lewis’ priority is making sure songwriters have a voice as labels and AI companies get closer to resolving the many copyright issues. “What we learned from streaming is, if you’re not at the table during those conversations of how it gets split, you’re on the menu,” she says.
Young Guru leads Jay-Z’s Roc Nation school for aspiring music professionals at Long Island University in Brooklyn. He tells young people to focus on something AI can’t replace: “physical human interaction.” He tells students not to email him. “Meet me in person and then let’s talk.”
Interestingly, it’s young people who have the most hesitation about AI, according to some sources: In a survey of music producers by the sample library Tracklib, the youngest age group — respondents in their 20s — had the most negative opinions of AI. Lauren Christy says her early-twentysomething daughters, who are also musicians, “feel very strongly about how plastic it all sounds.” They’re “having a very punk reaction about these technologies.”
“The new thing will be out-of-tune vocals done on an acoustic guitar, you know what I’m saying?” she says. “We’re going to zag instead of zig.”
“[AI] might become smart enough one day where it’ll mimic human imperfections, but I just don’t ever see it,” said Charlie Puth. “I see us humans getting smarter.”

