Meta Halts AI Training on EU User Data Amid Privacy Concerns
Meta announced on Friday that it is pausing plans to train its large language models (LLMs) on public content shared by adult users on Facebook and Instagram in the European Union, following a request from the Irish Data Protection Commission (DPC).
The company expressed disappointment at having to pause its AI plans, stating that it had taken into account feedback from regulators and data protection authorities in the region.
At issue is Meta's plan to use personal data to train its AI models without seeking users' explicit consent, relying instead on the legal basis of "Legitimate Interests" for processing first- and third-party data in the region.
The changes were set to take effect on June 26, before which the company said users could opt out of having their data used by submitting a request "if they wish." Meta already uses user-generated content to train its AI in other markets such as the U.S.
"This is a step backwards for European innovation and competition in AI development, and a further delay in bringing the benefits of AI to Europeans," said Stefano Fratta, Meta's global engagement director of privacy policy.
"We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we're more transparent than many of our industry counterparts."
The company also emphasized that it cannot launch Meta AI in Europe without the ability to train its models on locally collected data that reflects the region's diverse languages, geographies, and cultural references, arguing that anything less would deliver a second-rate experience.
Besides working with the DPC to bring the AI tool to Europe, Meta noted that the delay will give it time to address requests from the U.K. regulator, the Information Commissioner's Office (ICO), before starting the training.
"In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset," said Stephen Almond, executive director of regulatory risk at the ICO.
"We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and protect the information rights of U.K. users."
The development comes as the Austrian non-profit noyb (none of your business) filed a complaint in 11 European countries alleging that Meta violates GDPR privacy laws by collecting users' data to develop unspecified AI technologies and share it with any third party.
"Essentially, Meta is saying that it can use 'any data from any source for any purpose and make it available to anyone in the world,' as long as it's done via 'AI technology,'" said Max Schrems, founder of noyb. "This is clearly the opposite of GDPR compliance."
"Meta doesn't say what it will use the data for, so it could be a simple chatbot, extremely aggressive personalized advertising, or even a killer drone. Meta also says that user data can be made available to any 'third party' – which means anyone in the world."
Noyb also criticized Meta for making misleading claims and framing the delay as "collective punishment," pointing out that the GDPR permits the processing of personal data as long as users give explicit opt-in consent.
"Meta could roll out AI technology in Europe if it simply bothered to ask people for their consent, but it seems Meta is doing everything it can to avoid obtaining opt-in consent for any processing," the non-profit said.