Universal Music Takes Aim At Fake AI Tracks Fetching Thousands


Fraudsters are utilizing AI to sell unauthorized soundalike tracks, billed as genuine pre-releases from commercially prominent acts, for as much as a staggering $30,000 a pop, according to Universal Music Group (UMG).

The Big Three label shed light on the little-discussed and decidedly lucrative AI practice in a recent contribution to the World Intellectual Property Organization (WIPO). Entitled “Artificial Intelligence in the Music Industry: Its Use by Pirates and Right Holders” and penned by UMG VP of global content protection Graeme Grant, the submission retreads several of the company’s previously stated positions on AI.

(UMG is itself capitalizing on and investing in artificial intelligence in areas including content tagging and IP-theft prevention, Grant reiterated, while “AI platforms are being illicitly trained” with protected media. “To date we have detected and removed over 200,000 listings of counterfeit / unauthorized merchandise valued at over USD$45m+,” Grant relayed of UMG’s use of AI in pinpointing unapproved third-party merch offerings.)

Separately, however, the exec – roughly 30 months into his Universal Music tenure – also disclosed several new details on the AI front. Since August of this year, for instance, “the number of AI-generated uploads to user-generated content platforms implicating our rights has grown 175 per cent,” per Grant, with 47 percent “of the notices sent thus far” having been triggered by master-recording detections.

The remaining 53 percent of notices resulted from compositional, trademark, and right of publicity claims, Grant indicated.

Shifting to the initially mentioned soundalike “pre-releases,” fraudsters are increasingly “using AI to claim they have pre-release tracks which they then make available for sale,” Grant maintained.

“These individuals typically upload brief snippets of AI-generated tracks impersonating UMG’s artists[’] voices to popular leak sites, falsely claiming to have obtained the tracks directly from the artists through illicit means such as hacking, phishing or misrepresentation,” elaborated the former IFPI higher-up.

“Believing these tracks to be authentic, users often engage in ‘group buys’, pooling their resources to meet the fraudster’s inflated asking price, which can range from USD5,000 to 30,000. The users are often unaware that the track in question was not created by the artist, but rather by AI technology,” he proceeded.

While Grant opted against identifying the precise overall numbers associated with the alleged scams, the massive price range appears to underscore the relative quality of the works in question. Given the unlawful efforts’ resulting appeal to superfans – and the lengths those fans are apparently willing to go to obtain the tracks – it’ll be worth closely monitoring attempts to stamp out the practice in the new year.

Building upon the point, Grant’s comments may provide context as to the direction of UMG’s AI-related policy stances and goals during 2024.

“Once these models are fully trained, they are often disseminated through social communities on platforms like Discord and Reddit, and repositories such as GitHub and Hugging Face,” Grant wrote in the contribution, following an in-document image that appears to show a Discord server dedicated to AI-vocal systems.

“They are often accompanied by complete and comprehensive tutorials on how to employ these models to generate new, derivative works,” he continued, highlighting specific examples thereafter.

Interestingly, despite the scope of the problem and Universal Music’s support for legislation like the EU’s Artificial Intelligence Act, the major label doesn’t believe that copyright law requires fundamental change to meet the challenges of AI.

“In general, UMG is of the opinion that current copyright legislation, if interpreted, applied, and enforced correctly, does not need to change,” Grant communicated. “In selected territories, however, additional protection of personal rights (i.e., voice and likeness) may be necessary.”
