LinkedIn Suspends AI Data Processing in United Kingdom Amid Privacy Concerns Raised by ICO
The U.K. Information Commissioner’s Office (ICO) has confirmed that LinkedIn, the Microsoft-owned professional social networking platform, has suspended processing of users’ data in the country to train its artificial intelligence (AI) models.
“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users,” Stephen Almond, executive director of regulatory risk, said in a statement.
“LinkedIn has confirmed that it has suspended such model training pending further engagement with the ICO,” he added.
Almond also said the ICO intends to closely monitor companies that offer generative AI capabilities, including Microsoft and LinkedIn, to ensure they have adequate safeguards in place and take steps to protect the information rights of U.K. users.
The development comes after the Microsoft-owned company admitted to training its AI models on users’ data without their explicit consent as part of a revised privacy policy that took effect on September 18, 2024, as reported by 404 Media.
“At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide this setting to members in those regions until further notice,” LinkedIn said.
In a separate FAQ, the company also said it seeks to minimize the personal data in the datasets used for model training, including by using technologies to redact or remove personal data from the training dataset.
Users who reside outside Europe can opt out by heading to the “Data privacy” section in account settings and turning off the “Data for Generative AI Improvement” setting.
“Opting out means that LinkedIn and its affiliates won’t use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place,” LinkedIn noted.
LinkedIn’s decision to quietly opt all users in to AI model training came shortly after Meta admitted to scraping non-private user data for similar purposes going as far back as 2007. Meta has since resumed training on U.K. users’ data.
Last year, Zoom abandoned its plans to use customer content for AI model training after concerns were raised over how that data could be used following changes to the app’s terms of service.
The latest development underscores the growing scrutiny of AI, particularly around how individuals’ data and content are used to train large AI language models.

It also follows a report from the U.S. Federal Trade Commission (FTC) which found that major social media and video streaming platforms have engaged in vast surveillance of users with lax privacy controls and inadequate safeguards, especially for children and teens.
Users’ personal information is often combined with data gleaned from artificial intelligence, tracking pixels, and third-party data brokers to create comprehensive consumer profiles, which are then monetized by selling to other interested buyers.
“The companies collected and could indefinitely retain troves of data, including information from data brokers, and about both users and non-users of their platforms,” the FTC said, calling their data collection, minimization, and retention practices “woefully inadequate.”
“Various companies engaged in broad data sharing that raises serious concerns regarding the adequacy of the companies’ data handling controls and oversight. Some companies did not delete all user data in response to user deletion requests.”