Under new rules Spotify announced today, an AI “band” like the now-infamous Velvet Sundown will still be allowed on the service, but will be encouraged to label itself properly from the start. Overall, Spotify has zero intention of eliminating AI-generated music from its platform, execs said Tuesday at a press conference announcing the new guidelines. At the same time, the company said it’s waging war against a flood of low-quality AI content, having removed more than 75 million “spammy” tracks in the past 12 months alone.
There’s no question that in the wake of the rise of services such as Suno — which allow near-instant generation of new songs — AI-generated music is deluging streaming services. Spotify competitor Deezer has said that approximately 28 percent of its daily uploads are fully AI-generated music, though those tracks account for a mere 0.5 percent of actual streams. But even as Spotify aims to curb the impact of that onslaught, it’s signaling that AI music is here to stay. “We’re not here to punish artists for using AI authentically and responsibly,” said Charlie Hellman, Spotify’s VP and global head of music product. “We hope that artists’ use of AI production tools will enable them to be more creative than ever.”
Spotify is most concerned about “mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop,” according to a blog post from the company. Accordingly, it’s rolling out a new spam filter to flag uploaders engaging in those practices, and thus “help prevent spammers from generating royalties that could be otherwise distributed to professional artists and songwriters.” The company will stop short of removing those tracks, though, instead simply making them ineligible for recommendation by the streamer’s algorithm. Notably, it will still have no rules in place against boosting AI-generated songs in general.
The platform will encourage, but apparently not require, artists to label their AI usage via a new industry standard developed through DDEX — a long-standing non-profit that creates technical standards for song metadata across platforms. The idea is that artists will specify their precise uses of generative AI, ranging from fully prompt-generated songs to human-made songs with AI-tweaked lyrics. The approach treats AI use as “a spectrum, not a binary,” said Sam Duboff, Spotify’s global head of marketing and policy for its music business.
The new policies also include more explicit bans on unauthorized AI voice clones and deepfakes. “Some artists may choose to license their voice to AI projects—and that’s their choice to make,” Duboff said. “Our job is to do what we can to ensure that the choice stays in their hands.” At the same time, the company says it’s aiming to be more vigilant about “profile mismatches,” where fraudsters upload content under the name of real artists, often famous ones.
This summer, the Velvet Sundown, a fake AI-generated band that initially didn’t acknowledge its nature, amassed over a million monthly listeners while appearing to benefit from algorithmic promotion on Spotify. The band eventually admitted in an updated bio that it was “a synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence.” Duboff suggested the “band” might have gained much less notoriety if it had been properly labeled from the start. “I think the kind of news cycle, the fan interest, would’ve been really different,” he said.