Spotify is beta testing a new "Artist Profile Protection" feature that allows artists to review releases before they go live on their profiles. The move addresses a problem that has plagued music streaming for years: unauthorised or misattributed content appearing under artists' names without their consent.
"Music has been landing on the wrong artist pages across streaming services, and the rise of easy-to-produce AI tracks has made the problem worse," Spotify wrote in a blog post. This isn't a new phenomenon. In the past 12 months alone, Spotify has removed over 75 million spammy tracks from the platform.
The problem runs deeper than simple confusion. Low-quality, mass-produced AI songs are increasingly appearing on Spotify, often designed to exploit platform algorithms or royalty systems, raising concerns about quality control and fairness for human musicians. Spotify uses a pro-rata model in which all subscription revenue is pooled and divided by total streams, so every stream captured by an AI-generated track shrinks the share paid out to real artists.
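The dilution effect of the pro-rata model can be illustrated with a quick back-of-the-envelope calculation. The figures below are invented for illustration; they are not Spotify's actual revenue or stream counts.

```python
# Hypothetical illustration of pro-rata dilution; all numbers are invented,
# not Spotify's actual revenue or stream counts.
revenue_pool = 1_000_000.00   # total subscription revenue for the period

human_streams = 90_000_000    # streams of tracks by real artists
spam_streams = 10_000_000     # streams captured by AI spam tracks

# Payout per stream without spam vs. with spam diluting the pool
rate_clean = revenue_pool / human_streams
rate_diluted = revenue_pool / (human_streams + spam_streams)

# Revenue real artists lose to the dilution
lost = (rate_clean - rate_diluted) * human_streams
print(f"per-stream rate drops from {rate_clean:.6f} to {rate_diluted:.6f}")
print(f"real artists lose {lost:.2f} from the pool")
```

Under these made-up numbers, spam streams amounting to 10% of the total would redirect 10% of the pool away from human musicians, which is why the per-stream model makes even low-quality spam profitable at scale.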
How the new protection works
If artists enable Artist Profile Protection, they'll receive an email notification when music is delivered to Spotify with their name attached, and from there they can approve or decline the request. Artists in the beta will see the feature in their "Spotify for Artists" settings on desktop and mobile web.
To avoid creating friction with legitimate distribution partners, artists will be assigned an artist key: a unique code to share with trusted providers, so that releases delivered through those providers are automatically pre-approved and go live as normal.
Though Spotify has encouraged subscribers and artists to use its reporting resources to flag AI-generated music, this marks the first time the company has given artists an active role in preventing AI fraud as well as avoiding common mix-ups in the release process.
Broader context for the platform
This feature fits into a larger push by Spotify to clean up its ecosystem. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push "slop" into the ecosystem, and interfere with authentic artists working to build their careers; that kind of harmful AI content often attempts to divert royalties away from legitimate creators.
The company has already taken several steps to address the issue. The new feature follows the streaming giant's September 2025 measures to tackle AI slop and protect real artists, including a spam-filter system and restrictions on voice impersonation. Spotify also introduced an impersonation policy that clarifies how it handles claims about AI voice clones: vocal impersonation is only allowed in music on Spotify when the impersonated artist has authorised the usage.
Some commentators welcome the shift toward artist control, though questions remain about whether opt-in systems go far enough. Once an artist enables the feature, however, music attributed to them will not be released unless it is approved, even if they never respond to a notification, which adds a meaningful layer of protection. Throughout the beta trial, Spotify will collect feedback from artists using Artist Profile Protection before rolling it out to all artists.
The strategic calculation here reflects genuine tension: music distribution has been democratised by technology, making it easier for independents to reach global audiences, but that same ease of access has created space for spam farmers and AI operators to flood platforms with low-effort content. Artist Profile Protection attempts to preserve the benefits of open distribution while giving creators a practical tool to defend their identity and earnings.