Deezer reports data showing 44% of new music is AI-generated

Deezer says 44% of new music is AI-generated, a striking data point that raises fresh questions about fraud, labels, royalties, and what listeners hear next.

44% of new music is AI-generated, and Deezer has the numbers

A new song goes live, then another, then thousands more before most listeners finish breakfast. That flood now includes a huge share of synthetic tracks. According to Deezer, 44% of new music is AI-generated, with nearly 75,000 AI-made tracks uploaded every day, based on figures published by the company on April 20 in Paris.

The scale matters because the upload pipeline shapes everything downstream: discovery, payouts, moderation, and trust. Deezer says this equals more than 2 million AI-generated songs per month. The company also says it is the only streaming platform openly tagging fully AI-generated music, a move that turns an abstract debate into a measurable product policy.

Those numbers did not appear overnight. Deezer says its patent-pending detection system launched in January 2025, and the tracked volume of synthetic songs rose from about 10,000 per day to 75,000 in a little over a year. That trend suggests generative music tools are becoming cheaper, faster, and easier to use at industrial scale.

Named tools matter here. Deezer says its system can identify output linked to major music generators such as Suno and Udio, while also expanding toward more general detection without relying on one fixed training dataset. This is an important distinction because creators can switch models quickly, and fraud operators usually do.

There is also a visibility gap for users. In a 2025 Ipsos survey commissioned by Deezer across eight countries (the USA, UK, France, Germany, Brazil, Canada, Japan, and the Netherlands), 97% of respondents could not reliably distinguish AI songs from human-made tracks in a blind test. At the same time, 80% said fully AI-generated songs should be clearly labeled.

That combination explains why this story goes beyond novelty. If most people cannot tell the difference, labeling becomes less of a cosmetic feature and more of a transparency layer. The next issue is whether those uploads are actually being heard, or whether another system is driving the traffic.

Why Deezer says most AI streams are fraudulent

The most alarming part of the data is not just volume. It is intent. Deezer says fully AI-generated tracks account for only 1% to 3% of total streams on the platform, yet 85% of those streams are flagged as fraudulent and demonetized. In other words, the listening share remains small, but the abuse signal is high.


That claim points to a familiar pattern in platform security. When production becomes cheap and distribution is automated, bad actors test the edges of the payout system. A bot farm does not care whether a song moves anyone emotionally. It only needs enough fake plays, enough accounts, and enough scale to extract money from royalties.

Deezer says it removes detected AI tracks from algorithmic recommendations and keeps them out of editorial playlists. It has also stopped storing hi-res versions of AI tracks. That might sound like a technical detail, but it is a cost-control and incentive-control decision. If a platform reduces storage, visibility, and monetization advantages for synthetic spam, the business case for fraud gets weaker.

This is where a cybersecurity lens helps. Abuse in music streaming now looks closer to abuse in ad tech, social media, and fake app installs. The same logic behind traffic manipulation applies: inflate activity, blend in with legitimate behavior, and exploit automated payment systems. Readers who follow broader synthetic media trends will recognize similar patterns in synthetic influencers and disputes over identity, ownership, and commercial value.
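Deezer has not published its anti-fraud logic, so the following is only a toy illustration of the kind of behavioral heuristics platforms in ad tech and streaming commonly describe: flagging accounts with abnormally high play volume, near-minimum playback lengths, or the same few tracks looped endlessly. All field names and thresholds here are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class StreamSession:
    """One listening session for a single account (illustrative fields only)."""
    account_id: str
    plays: int                 # total plays in the session
    unique_tracks: int         # distinct tracks played
    avg_seconds_played: float  # mean playback length per play

def looks_fraudulent(s: StreamSession,
                     max_plays: int = 500,
                     min_avg_seconds: float = 35.0,
                     min_track_ratio: float = 0.05) -> bool:
    """Flag sessions whose behavior resembles payout farming:
    very high volume, playback lengths hovering just above the
    royalty-counting minimum, and a tiny pool of tracks played on
    repeat. Thresholds are hypothetical, not Deezer's."""
    if s.plays > max_plays:
        return True
    if s.avg_seconds_played < min_avg_seconds:
        return True
    if s.plays > 0 and (s.unique_tracks / s.plays) < min_track_ratio:
        return True
    return False

# A plausible human listener versus a looping bot farm account.
human = StreamSession("a1", plays=40, unique_tracks=30, avg_seconds_played=180.0)
bot = StreamSession("b2", plays=2000, unique_tracks=3, avg_seconds_played=31.5)
```

Real systems layer many more signals (device fingerprints, payment trails, graph analysis across accounts), but the incentive logic is the same: make each of these checks expensive for the attacker to evade at scale.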

Deezer also says it began licensing its AI detection technology in January, opening the door for labels, distributors, and other music services to use the system. That matters because fraud rarely stays inside one platform. Once a loophole closes in one place, abusive uploads tend to migrate elsewhere.

| Key detail | Why it matters |
| --- | --- |
| 75,000 AI tracks uploaded daily | Shows industrial-scale generation, not occasional experimentation |
| 44% of daily uploads | Signals that synthetic music is nearing half of incoming catalog flow |
| 1% to 3% of streams | Consumption remains limited despite massive supply |
| 85% flagged as fraudulent | Indicates strong evidence of payout abuse and stream manipulation |

A practical takeaway emerges from these figures.

  • Detection now matters as much as content hosting.
  • Labeling is becoming a user trust issue, not just a policy choice.
  • Demonetization is central if platforms want to reduce incentives.
  • Cross-industry tools may become necessary as fraud shifts between services.

The real pressure point, though, is not just spam. It is what this surge means for artists, rights holders, and future revenue.

Artists, royalties, and the wider music business response

Deezer’s figures land in a market already worried about dilution. A joint study by CISAC and PMP Strategy warned that nearly 25% of creators’ revenues could be at risk by 2028, a potential hit of up to €4 billion. That estimate predates some of the latest acceleration in generative audio, which makes the platform data even harder to dismiss.


Alexis Lanternier, Deezer’s CEO, framed the issue around artist rights, fairness, and transparency. The company’s position is straightforward: synthetic music is no longer marginal, and the rest of the industry should adopt stronger safeguards. Based on Deezer’s reported design direction and its past strategy, the goal is clear: reduce payment dilution before the upload flood turns into a structural payout problem.

For musicians, the concern is not only direct copying. It is discoverability. If tens of thousands of machine-made tracks enter distribution every day, search results, recommendation systems, and playlist slots all become more contested. Even when AI tracks generate few genuine listens, they still create noise in catalog management and moderation pipelines.

The issue reaches beyond music. Hollywood, publishing, and creator platforms are facing related disputes over licensing, consent, and attribution. DualMedia has tracked adjacent questions in pieces on how creators can use AI without losing the human touch and the legal pressure surrounding likeness and ownership in entertainment. Music is simply where the industrialized upload model is easiest to measure.

There is one more reason Deezer’s data stands out. The company says more than 13.4 million AI tracks were detected and tagged on its service during 2025. That is not a speculative forecast. It is an operational signal from a major streaming platform with a defined detection stack and two patent filings dating back to December 2024.

If the numbers hold across competitors, the music business may need a shared standard for disclosure, auditing, and payment eligibility. Without that, each platform will keep building its own patchwork while bad actors test whichever service looks easiest to game.

The next question is the one most readers care about: what changes for listeners, labels, and platforms in the months ahead?

Frequently asked questions

How does Deezer detect AI-generated songs?

Deezer says its detection technology can identify output from major generative music systems such as Suno and Udio, and it is being expanded to work more broadly. The company filed two patent applications in December 2024 tied to methods for spotting signatures of synthetic audio.

Are AI-generated tracks popular with real listeners on Deezer?

Not at large scale, based on Deezer’s published figures. The platform says fully AI-generated music represents only 1% to 3% of total streams, even though it makes up 44% of daily uploads.

Why is fraud such a major part of this story?

Deezer reports that 85% of streams tied to fully AI-generated tracks were flagged as fraudulent in 2025 and removed from royalty payments. That suggests many uploads are designed to exploit payout systems rather than reach fans.


Do listeners want AI music to be labeled?

Yes, according to the Ipsos study commissioned by Deezer in November 2025. The survey found that 80% of respondents wanted fully AI-generated music to be clearly labeled, and 73% wanted to know when a service recommends it.

What to watch next

The headline number, 44% of new music is AI-generated, is hard to ignore, but the more revealing split is between supply and real consumption. Uploads are exploding, while audience demand remains comparatively low. That gap tells a story about automation, abuse, and platform economics more than listener preference.

Watch for three developments next. First, whether rival streaming services adopt visible labels and stronger moderation. Second, whether licensing deals for detection systems spread beyond Deezer. Third, whether regulators and rights groups push for common disclosure rules, especially as synthetic media expands across video, voice, and advertising. Similar pressures are already visible in broader AI reporting, including coverage of AI research shaping industries and the infrastructure used to keep online systems safer.

For now, Deezer has offered something rare in the AI content debate, operational data tied to a clear product response. In a market full of vague claims, actual measurements carry weight. That is why this report may end up influencing not just streaming policy, but how the creative economy defines authenticity in the next phase of AI.

Want more tech and innovation coverage like this? DualMedia Innovation News tracks the technology shifts that actually matter, from AI to foldable hardware to the next wave of consumer products.