AI and Bots Allegedly Used to Fraudulently Boost Music Streams

The music streaming industry, valued at over $20 billion globally, is facing a growing threat as AI-generated music and automated bots are being exploited to inflate stream counts and siphon royalties—posing serious challenges to artists, platforms, and the integrity of digital music metrics.

Deezer: Battling a Surge in AI-Driven Stream Fraud

A recent report by French streaming service Deezer paints a stark picture: AI-generated music accounts for just 0.5% of total streams, yet up to 70% of those streams are fraudulent, inflated by bots exploiting the system for royalty gains. Despite AI content's small overall share, the manipulation is significant enough that Deezer is investing heavily in detection tools that can identify music produced by prominent generative models such as Suno and Udio. Flagged tracks are excluded from royalty payments and algorithmic recommendations in a bid to protect legitimate artists.
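Deezer's two figures can be combined into a simple back-of-envelope estimate of how much of the overall stream pool is affected. The calculation below uses only the percentages reported above; no other data is assumed.

```python
# Back-of-envelope check of Deezer's reported figures.
ai_share_of_streams = 0.005   # AI-generated tracks: ~0.5% of all streams (per Deezer)
fraud_share_of_ai = 0.70      # up to 70% of those streams are bot-inflated

# Fraudulent AI streams as a fraction of ALL streams on the platform.
fraud_share_of_all = ai_share_of_streams * fraud_share_of_ai
print(f"Fraudulent AI streams as share of all streams: {fraud_share_of_all:.3%}")
```

Even at the upper bound, that is roughly 0.35% of all streams, which is why the problem is material for the royalty pool despite AI music's tiny overall footprint.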

To further combat the issue, Deezer has begun clearly labeling AI-generated content on its platform. Users now see warnings on tracks that include AI-generated music, an effort aimed not just at curbing fraud but at providing transparency and maintaining trust in streaming data.

The Michael Smith Case: Real Money, Fake Music

In a landmark indictment, Michael Smith, a musician from North Carolina, became the first individual in the U.S. criminally charged with streaming fraud aided by AI. Prosecutors allege that from 2017 to 2024, Smith generated hundreds of thousands of AI-made tracks and used up to 10,000 bot accounts to stream them billions of times across platforms like Spotify, Apple Music, and Amazon Music. This elaborate scheme netted more than $10 million in illegitimate royalties, depriving genuine artists of their rightful earnings.
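The indictment's numbers imply why automation was essential to the scheme. Prosecutors gave a royalty total ("more than $10 million") but only an order of magnitude for streams ("billions"), so the stream count below is an illustrative assumption, not a figure from the case.

```python
# Rough implied per-stream payout from the indictment's figures.
# Prosecutors said "billions" of streams; 3 billion is a hypothetical stand-in.
royalties_usd = 10_000_000        # "more than $10 million" in royalties
assumed_streams = 3_000_000_000   # illustrative assumption only

per_stream = royalties_usd / assumed_streams
print(f"Implied payout per stream: ${per_stream:.5f}")
```

At a fraction of a cent per stream, no individual play is worth much; only bot farms generating plays around the clock turn these micro-payments into millions.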

Smith’s actions highlight not only the complexities of detecting AI-driven fraud but also how the generative power of AI can be weaponized at scale. One of his collaborators, initially involved in promotion, later became a whistleblower, alerting authorities to the scheme.

Broader Industry Implications and Ethical Concerns

This issue isn’t isolated. A growing number of independent and deceased artists have fallen victim to AI impersonation, with fake tracks uploaded in their names without consent, causing emotional and financial harm. For example, the Spotify pages of deceased artists like Blaze Foley have featured songs falsely attributed to them, described as “AI schlock bot” creations by those managing the artists’ estates.

Industry observers warn that the problems go deeper: fraudulent streams bleed revenue from legitimate artists, distort platform charts, and challenge the role of streaming as a fair metric of popularity. With AI making it easier to churn out content in bulk, these risks are amplifying.

What’s at Stake: Integrity, Royalties, and Innovation

This evolving landscape raises several urgent issues:

  • Financial exploitation: Fraudsters are tapping into the royalty system, diverting funds from real artists.

  • Transparency: Without clear indicators, listeners, platforms, and rights holders face confusion over what’s authentic.

  • Regulation and detection: Platforms must invest in AI-powered detection systems—and continuously adapt them—as the actors behind the fraud evolve.

  • Ethics of AI: Balancing AI innovation with respect for artistry, consent, and fair economic outcomes is becoming a critical conversation.
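To make the detection point concrete, here is a minimal, hypothetical first-pass heuristic of the kind a platform might run before deeper analysis. All names and thresholds are illustrative assumptions, not any platform's actual rules; production systems are far more sophisticated and continuously retuned as fraudsters adapt.

```python
# Hypothetical first-pass stream-fraud heuristic (illustrative thresholds only).
from dataclasses import dataclass

@dataclass
class AccountStats:
    account_id: str
    plays_per_day: float     # average daily stream count
    unique_tracks: int       # distinct tracks played
    avg_play_seconds: float  # mean listen duration per play

def looks_like_bot(a: AccountStats) -> bool:
    # Bots looping a small catalog of short AI tracks tend to show high
    # volume, low track diversity, and play durations hovering just past
    # the point at which a stream starts earning royalties.
    return (
        a.plays_per_day > 500
        or (a.unique_tracks < 20 and a.plays_per_day > 200)
        or a.avg_play_seconds < 35
    )

accounts = [
    AccountStats("human-1", plays_per_day=60, unique_tracks=180, avg_play_seconds=140),
    AccountStats("farm-7", plays_per_day=1400, unique_tracks=12, avg_play_seconds=31),
]
flagged = [a.account_id for a in accounts if looks_like_bot(a)]
print(flagged)  # ['farm-7']
```

Simple volume-and-diversity rules like these catch only the crudest farms; the cat-and-mouse dynamic described above is precisely why platforms pair such filters with adaptive, AI-powered detection.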

This unfolding crisis reveals the double-edged nature of AI in creative industries: while it can empower artists, it can also be misused for fraudulent gain. As the music world grapples with these threats, the response from platforms, regulators, and creators will shape the next chapter in how music is consumed and valued online.