Read this in other languages: English, 简体中文.
Federated learning (FL), first proposed by Google, is a rapidly growing research area of machine learning that aims to protect individual data privacy in distributed machine learning, especially in finance, smart healthcare, and edge computing. Unlike traditional data-centralized distributed machine learning, participants in an FL setting use local data to train local models, then follow specific aggregation strategies with the other participants to obtain the final model collaboratively, avoiding direct data sharing.
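The local-training-then-aggregation workflow described above can be sketched as a minimal FedAvg-style loop. This is an illustrative toy (plain Python, a one-parameter linear model), not FedLab's actual API:

```python
# Toy FedAvg-style sketch (hypothetical, NOT FedLab's API): each client
# trains on its private data; only model parameters are communicated.

def local_train(w, data, lr=0.1):
    """One gradient-descent step on a 1-D linear model y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: size-weighted average of client models."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients with private datasets; raw data never leaves a client.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
global_w = 0.0
for _ in range(50):  # communication rounds
    local_ws = [local_train(global_w, data) for data in clients]
    global_w = fed_avg(local_ws, [len(data) for data in clients])
# global_w converges toward 2.0, the weight fitting y = 2x on all data.
```

Only the scalar `global_w` crosses the network in each round; the tuples in `clients` stay local, which is the privacy property FL is built around.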
To relieve the burden on researchers implementing FL algorithms and free FL scientists from repeatedly re-implementing basic FL settings, we introduce FedLab, a highly customizable framework. FedLab provides the necessary modules for FL simulation, including communication, compression, model optimization, data partitioning, and other functional modules. Users can build an FL simulation environment from custom modules like playing with LEGO bricks. For better understanding and ease of use, benchmarks of FL algorithms implemented with FedLab are also provided.
Implementations of FL algorithms using FedLab can be found in the FedLab Benchmark repository.
You're welcome to contribute to this project through a Pull Request.
- By contributing, you agree that your contributions will be licensed under Apache License, Version 2.0
- Docstrings and code should follow the Google Python Style Guide: Chinese | English
- The code should provide test cases using `unittest.TestCase`
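As an illustration of the expected test style, a contributed test case might look like the following minimal sketch; the function under test here is hypothetical, and a real contribution would import the actual FedLab module being tested:

```python
import unittest

# Hypothetical function under test, used only to illustrate the
# unittest.TestCase convention expected of contributions.
def weighted_average(values, weights):
    """Weighted mean of `values` with the given non-negative `weights`."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

class WeightedAverageTestCase(unittest.TestCase):
    def test_uniform_weights(self):
        # Equal weights reduce to the plain mean.
        self.assertAlmostEqual(weighted_average([1.0, 3.0], [1, 1]), 2.0)

    def test_skewed_weights(self):
        # (1.0 * 3 + 3.0 * 1) / 4 = 1.5
        self.assertAlmostEqual(weighted_average([1.0, 3.0], [3, 1]), 1.5)
```

Such a file can be run with `python -m unittest` from the repository root, which discovers and executes every `TestCase` subclass.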
Please cite FedLab in your publications if it helps your research:
```bibtex
@article{smile2021fedlab,
  title={FedLab: A Flexible Federated Learning Framework},
  author={Dun Zeng and Siqi Liang and Xiangjing Hu and Zenglin Xu},
  journal={arXiv preprint arXiv:2107.11621},
  year={2021}
}
```
Contact the FedLab development team through Github issues or email:
- Dun Zeng: [email protected]
- Siqi Liang: [email protected]