AI-secure/FLBenchmark-toolkit

Federated Learning Framework Benchmark (UniFed)

Score: 36 / 100 (Emerging)
This platform helps AI researchers and practitioners evaluate and compare different federated learning frameworks. It takes your dataset and the federated learning framework you want to test, then outputs performance metrics like accuracy or AUC across various data modalities (image, text, tabular, medical, sensor data) and task types. The primary user is someone building or evaluating federated learning models, often in a research or applied AI engineering context.

No commits in the last 6 months.

Use this if you need to systematically benchmark different federated learning frameworks under various real-world scenarios, such as cross-device or cross-silo data configurations.

Not ideal if you are looking for a simple tool to train a single federated learning model without needing to compare multiple frameworks or deployment scenarios.

federated-learning machine-learning-evaluation distributed-AI model-benchmarking privacy-preserving-AI
Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 12 / 25
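The overall score is simply the sum of the four category subscores, each graded out of 25. A minimal sketch of that arithmetic (the category names and values are taken from the breakdown above; how each subscore is derived is not documented here):

```python
# Category subscores from the breakdown above, each out of 25.
subscores = {
    "Maintenance": 0,
    "Adoption": 8,
    "Maturity": 16,
    "Community": 12,
}

# The four categories sum to the overall score out of 100.
total = sum(subscores.values())
print(f"{total} / 100")  # -> 36 / 100
```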


Stars: 49
Forks: 6
Language: Python
License: Apache-2.0
Last pushed: Jun 14, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AI-secure/FLBenchmark-toolkit"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
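The same request can be made from Python with only the standard library. This is a sketch under assumptions: the `quality_url` helper is hypothetical (it just assembles the documented endpoint path), and the JSON response schema is not specified here, so no fields are accessed:

```python
import json
import urllib.request

# Base path from the documented curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Hypothetical helper: build the quality-API URL for one repo."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "AI-secure", "FLBenchmark-toolkit")

# Uncomment to fetch (no key needed, 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)  # schema undocumented here; inspect before use
```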