Safeguarding Trained Models in Privacy-Preserving Federated Learning

This article is part of a series on privacy-preserving federated learning. The series is a collaboration between NIST and the UK government's Responsible Technology Adoption Unit (RTA), formerly known as the Centre for Data Ethics and Innovation. To learn more and read all the posts published to date, visit NIST's Privacy Engineering Collaboration Space or RTA's blog.

The previous posts in this series described techniques for input privacy in privacy-preserving federated learning for horizontally and vertically partitioned data. To build a complete privacy-preserving federated learning system, these techniques must be combined with an approach for output privacy, which limits how much can be inferred about individuals in the training data once the model has been trained.

As discussed in our earlier post in this series on privacy attacks in federated learning, trained models can leak significant details about their training data, including entire images and passages of text.

Training with Differential Privacy

The strongest known form of output privacy is differential privacy. Differential privacy is a formal privacy framework applicable in many settings; when used during training, it adds random noise to the model to defend against privacy attacks. The noise prevents the model from memorizing specific details of the training data, so that those details cannot later be extracted from the model. For example, Carlini et al. showed that sensitive training data such as Social Security numbers could be extracted from trained language models, and that training with differential privacy prevented this attack.
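
To make this concrete, here is a minimal NumPy sketch of the core idea behind differentially private training (in the style of DP-SGD): each example's gradient is clipped to bound its influence, and Gaussian noise calibrated to that bound is added before the parameter update. The function and parameter names (dp_gradient_step, clip_norm, noise_multiplier) are illustrative only, not from the post or any specific library.

```python
import numpy as np

def dp_gradient_step(params, per_example_grads, clip_norm=1.0,
                     noise_multiplier=1.1, learning_rate=0.1, rng=None):
    """One differentially private gradient step (DP-SGD-style sketch).

    Each per-example gradient is clipped, the clipped gradients are
    averaged, and Gaussian noise scaled to the clipping bound is added
    before the parameters are updated.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=avg.shape)
    return params - learning_rate * (avg + noise)
```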

Differential Privacy for Privacy-Preserving Federated Learning

In centralized training, where all of the data is held by a single server, the server can perform training and add the noise required for differential privacy in one step. In privacy-preserving federated learning, however, it is less obvious who should add the noise and how it should be added.

Figure: FedAvg with differential privacy, for privacy-preserving federated learning on horizontally partitioned data. Modifications to the FedAvg approach are highlighted in red: each participant adds random noise to its update, so that the aggregated noise is sufficient to ensure differential privacy for the trained global model.

Credit: NIST

For privacy-preserving federated learning on horizontally partitioned data, Kairouz et al. propose a variant of the FedAvg approach described in our fourth post. In this approach, each participant performs local training, adds a small amount of random noise to its model update, and then submits the update for aggregation with the other participants' updates. If every participant adds noise to its update correctly, the final aggregated model will contain enough noise to ensure differential privacy. This technique provides output privacy even against a malicious aggregator. The Scarlet Pets team used a variant of this approach in their winning solution for the UK-US PETs Prize Challenges.
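
The sketch below illustrates this idea: each participant clips its own model update and adds noise locally, so the aggregator only ever sees noisy updates, which it averages into the global model. The noise level here is a placeholder; in a real deployment it would be calibrated, as in Kairouz et al., so that the aggregated noise meets a target (epsilon, delta) guarantee.

```python
import numpy as np

def clip_and_noise_update(update, clip_norm=1.0, noise_stddev=0.5, rng=None):
    """Clip a participant's model update and add local Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_stddev, size=update.shape)

def federated_round(global_model, client_updates,
                    clip_norm=1.0, noise_stddev=0.5):
    """One FedAvg round: clients noise their own updates before
    aggregation, and the server averages the noisy updates."""
    noisy = [clip_and_noise_update(u, clip_norm, noise_stddev)
             for u in client_updates]
    return global_model + np.mean(noisy, axis=0)
```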

Ensuring differential privacy for vertically partitioned data is more complex. The noise required for differential privacy cannot be added before entity alignment, because it would prevent records from being matched correctly across parties. Instead, the noise must be added after alignment, either by a trusted party or using techniques like homomorphic encryption or multiparty computation.
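
A small illustration of why the ordering matters, assuming for simplicity that a trusted party performs the alignment: the join must use the true identifiers, and the noise is applied only to the result computed after alignment. The column names and the Laplace-noised count below are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical vertically partitioned data: each party holds different
# attributes about (some of) the same individuals, keyed by a shared ID.
party_a = pd.DataFrame({"id": [1, 2, 3, 4], "income": [40, 55, 70, 30]})
party_b = pd.DataFrame({"id": [2, 3, 4, 5], "owns_home": [1, 0, 1, 0]})

# Entity alignment must happen on the true IDs; noising them first
# would break the join.
aligned = party_a.merge(party_b, on="id")

# After alignment, a trusted party (or an MPC/HE protocol) adds noise to
# the value being released -- here, a differentially private count.
rng = np.random.default_rng(0)
true_count = int((aligned["owns_home"] == 1).sum())
epsilon = 1.0
dp_count = true_count + rng.laplace(0.0, 1.0 / epsilon)  # sensitivity 1
print(dp_count)
```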

Training Highly Accurate Models with Differential Privacy

The random noise required for differential privacy can reduce model accuracy. In general, more noise yields stronger privacy but lower accuracy. This tension between accuracy and privacy is commonly called the privacy-utility tradeoff.
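
The tradeoff can be seen directly in the noise scale of the classic Gaussian mechanism: as the privacy parameter epsilon shrinks (stronger privacy), the standard deviation of the noise grows. This is the textbook formula (valid for epsilon at most 1), shown purely as an illustration.

```python
import numpy as np

def gaussian_sigma(epsilon, delta=1e-5, sensitivity=1.0):
    """Noise scale of the classic Gaussian mechanism (epsilon <= 1)."""
    return np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / epsilon

for eps in [0.1, 0.5, 1.0]:
    print(f"epsilon={eps}: sigma={gaussian_sigma(eps):.2f}")
# Smaller epsilon (stronger privacy) requires proportionally larger
# noise, which in turn reduces the accuracy of the trained model.
```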

For some kinds of machine learning models, including linear regression, logistic regression, and decision trees, this tradeoff is easy to manage: the approach described above often works well for training highly accurate models with differential privacy. In the UK-US PETs Prize Challenges, both the PPMLHuskies and Scarlet Pets teams used approaches along these lines to train highly accurate differentially private models.
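
As an illustration of how simple this can be for such models, the sketch below trains a differentially private logistic regression classifier, assuming IBM's open-source diffprivlib library (our choice for illustration, not mentioned in the post) is installed; the toy data and the epsilon and data_norm values are arbitrary.

```python
import numpy as np
from diffprivlib.models import LogisticRegression  # assumed DP library

# Toy data: 200 points with 2 features and a roughly linear decision boundary.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# data_norm bounds each row's L2 norm, which determines the sensitivity
# used to calibrate the noise for the chosen epsilon.
model = LogisticRegression(epsilon=1.0, data_norm=5.0)
model.fit(X, y)
print(model.score(X, y))
```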

Training neural networks and deep learning models with differential privacy is harder: these models are much larger, so more noise is needed to ensure privacy, which reduces accuracy. Models of this kind were not part of the UK-US PETs Prize Challenges, but they are increasingly important across generative AI applications, including large language models.

Recent research suggests that models pre-trained on public data (without differential privacy) and then fine-tuned with differential privacy can approach the accuracy of models trained without differential privacy. For example, Li et al. showed that pre-trained language models can be fine-tuned with differential privacy to reach nearly the same accuracy as non-differentially private models. These results suggest that in domains where public data is available for pre-training, notably language and image models, it is possible to achieve both privacy and utility in privacy-preserving federated learning.
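
A rough sketch of this fine-tuning recipe, assuming the Opacus library (our choice for illustration, not named in the post): a small network standing in for a publicly pre-trained model is wrapped so that each fine-tuning step clips per-example gradients and adds calibrated noise. All names and hyperparameters below are placeholders.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine  # assumed DP training library

# Stand-in for a pre-trained model: in practice this would be a language
# or vision model pre-trained on public data, fine-tuned here with DP.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# Placeholder fine-tuning data.
X = torch.randn(256, 16)
y = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(X, y), batch_size=32)

# Wrap model, optimizer, and data loader so each step clips per-example
# gradients and adds noise calibrated by noise_multiplier.
model, optimizer, loader = PrivacyEngine().make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.0, max_grad_norm=1.0,
)

for xb, yb in loader:
    optimizer.zero_grad()
    loss = criterion(model(xb), yb)
    loss.backward()
    optimizer.step()
```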

Note that this approach provides no privacy guarantee for the public data used in pre-training, so its use must comply with applicable privacy and intellectual property law (the legal and ethical considerations here are beyond the scope of this blog series).

Upcoming

In the next post, we will discuss the practical challenges of deploying privacy-preserving federated learning in real-world settings.
