
Privacy-preserving federated learning is distributed machine learning where multiple collaborators train a model through protected gradients. To achieve robustness to users dropping out, existing practical privacy-preserving federated learning schemes are based on (t, N)-threshold secret sharing. Such schemes rely on a strong assumption to guara…
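As background, here is a minimal sketch of (t, N)-threshold secret sharing in the Shamir style, the primitive the schemes mentioned above build on. This is a generic illustration, not the paper's protocol; the field prime and function names are chosen for the example.

```python
# Generic Shamir (t, N)-threshold secret sharing sketch; NOT PFLM's scheme.
import random

P = 2**61 - 1  # prime modulus for the finite field GF(P)

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them suffice to reconstruct it."""
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Recover the secret by Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Multiply by the modular inverse of the denominator (Fermat).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Robustness to dropouts follows because any t of the N shares reconstruct the secret, so up to N - t users can go offline.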

PFLM

This repository accompanies our Information Sciences 2021 paper "PFLM: Privacy-preserving Federated Learning with Membership Proof". Detailed instructions follow.

Install the required packages

virtualenv -p /usr/bin/python3 venv
source venv/bin/activate
pip install -r requirements.txt

Install the pypbc package (Python bindings for the PBC pairing library; building it requires the GMP and PBC C libraries to be installed first)

  1. Set the gradient dimension NB_CLASSES appropriately in client.py and server.py.

  2. Adjust the dropout rate in server.py for the experiments. For example, a value of 10 means 10 percent of users drop out of PFLM.

  3. Adjust the timeouts of the five rounds in server.py to match the measured RTT (after completing the two steps above).

  4. Execute the corresponding shell script. For example,

bash nodrop.sh
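The three knobs in steps 1-3 can be sketched as module-level constants. Only NB_CLASSES is named in the instructions above; DROPOUT and ROUND_TIMEOUTS are hypothetical stand-ins for the actual names in server.py.

```python
# Illustrative configuration block; only NB_CLASSES appears in the README,
# the other names are hypothetical stand-ins for the knobs described above.
NB_CLASSES = 10              # gradient dimension; must match in client.py and server.py
DROPOUT = 10                 # percent of users dropping out (10 -> 10%)
ROUND_TIMEOUTS = [5.0] * 5   # seconds per round; tune to the measured RTT

def num_dropped(num_users: int) -> int:
    """Number of users expected to drop out given DROPOUT percent."""
    return num_users * DROPOUT // 100
```

With DROPOUT = 10, a run with 100 clients would simulate 10 of them going offline.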

Note that the data recorded in the experiments is saved in the BENCHMARK folder, and the figures are in the Plot folder.
