AI-secure/FLBenchmark-toolkit
Federated Learning Framework Benchmark (UniFed)
This platform helps AI researchers and practitioners evaluate and compare different federated learning frameworks. It takes your dataset and the federated learning framework you want to test, then outputs performance metrics like accuracy or AUC across various data modalities (image, text, tabular, medical, sensor data) and task types. The primary user is someone building or evaluating federated learning models, often in a research or applied AI engineering context.
No commits in the last 6 months.
Use this if you need to systematically benchmark different federated learning frameworks under various real-world scenarios, such as cross-device or cross-silo data configurations.
Not ideal if you are looking for a simple tool to train a single federated learning model without needing to compare multiple frameworks or deployment scenarios.
Stars: 49
Forks: 6
Language: Python
License: Apache-2.0
Category:
Last pushed: Jun 14, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AI-secure/FLBenchmark-toolkit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
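The endpoint above follows the pattern `/api/v1/quality/ml-frameworks/{owner}/{repo}`. A minimal Python sketch for building and fetching that URL; the response schema is not documented here, so only the request is shown (the `quality_url` helper is illustrative, not part of the API):

```python
import json
from urllib.request import urlopen  # stdlib; used only if you actually fetch

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("AI-secure", "FLBenchmark-toolkit")
print(url)

# To fetch (100 requests/day without a key; schema undocumented):
# data = json.load(urlopen(url, timeout=10))
```

With a free API key the limit rises to 1,000 requests/day; how the key is passed (header vs. query parameter) is not specified on this page.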
Higher-rated alternatives
flwrlabs/flower
Flower: A Friendly Federated AI Framework
JonasGeiping/breaching
Breaching privacy in federated learning scenarios for vision and text
anupamkliv/FedERA
FedERA is a modular and fully customizable open-source FL framework, aiming to address these...
zama-ai/concrete-ml
Concrete ML: Privacy Preserving ML framework using Fully Homomorphic Encryption (FHE), built on...
p2pfl/p2pfl
P2PFL is a decentralized federated learning library that enables federated learning on...