
LinkedIn illegally used private messages to train artificial intelligence models, according to a California class action lawsuit seeking damages for “millions” of users.
The filing, made by Alessandro de la Torre “on behalf of all others similarly situated”, accuses the professional social networking site of “unlawfully disclosing its premium customers’ private messages to third parties”.
According to the lawsuit, LinkedIn “quietly” introduced a privacy setting in August 2024 which automatically opted premium subscribers into the use of their personal data for training generative AI models by the company and its affiliates.
The class action claim further accuses the Microsoft-owned company of changing its privacy policy in September after the data-sharing was noticed and caused a “harsh public backlash”. It describes the move as a violation of LinkedIn’s own privacy terms and part of a “pattern of attempting to cover its tracks”.
“This behaviour suggests that LinkedIn was fully aware that it had violated its contractual promises and privacy standards and aimed to minimise public scrutiny and potential legal repercussions,” de la Torre’s lawsuit said.
A LinkedIn spokesman said: “These are false claims with no merit.”
The lawsuit is seeking $1,000 per user for alleged violations of the Stored Communications Act, as well as further damages for breach of contract and violation of California competition law.
The filing argues that LinkedIn premium members share “potentially life-altering” business information through the platform’s InMail feature and that this sensitive information was used improperly.
LinkedIn has more than a billion users worldwide, but the company does not disclose how many of them are premium subscribers, who pay upwards of $29.99 a month for access to exclusive features.
De la Torre said that he personally used LinkedIn’s messaging service for “discussions about potential financing for startups, job-seeking efforts, and attempts to reconnect with former colleagues”.
He claims the release of this information to third parties for AI training would “jeopardise [his] professional relationships, compromise business opportunities, and negatively impact his career prospects”.
On September 20, the platform suspended the sharing of UK user data for AI model training after the Information Commissioner’s Office raised concerns about the practice. Data sharing was also halted in Switzerland and the European Economic Area.
LinkedIn is not the only platform to have recently updated its policy on the use of customer data for AI training. Meta Platforms, the owner of Facebook and Instagram, announced in September that it would begin training its AI models on British users’ public posts on its platforms.
The company had previously paused data collection in the UK following “regulatory feedback” and relaunched its programme with additional transparency measures. Private messages and content from users under the age of 18 are not used to train Meta’s models.