warisgill/TraceFL

TraceFL is a novel mechanism for Federated Learning that achieves interpretability by tracking neuron provenance. It identifies clients responsible for global model predictions, achieving 99% accuracy across diverse datasets (e.g., medical imaging) and neural networks (e.g., GPT).

Score: 21 / 100 (Experimental)

TraceFL helps machine learning engineers and researchers understand which clients contribute most to a global model's predictions in a federated learning setup. It takes a trained global model and, for any given prediction, identifies the specific client (e.g., a hospital's dataset) that was most influential. This allows the practitioner to debug issues, assess client contributions, and enhance model reliability without ever seeing raw client data.
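TraceFL's actual neuron-provenance mechanism is more involved than this, but the core attribution idea can be illustrated with a toy case. In the sketch below (all names and the setup are hypothetical, not TraceFL's API), the global model is a linear scorer whose weights are the average of client updates, so the logit for an input decomposes exactly into per-client terms and the most influential client is the one with the largest term:

```python
import numpy as np

def attribute_prediction(x, client_deltas):
    """Score each client's contribution to a linear model's logit for input x.

    If the global weights are the average of the client updates,
    then logit = x @ w = sum_c (x @ delta_c) / n, so each term is
    exactly one client's contribution to this prediction.
    """
    n = len(client_deltas)
    scores = np.array([x @ delta / n for delta in client_deltas])
    return scores, int(np.argmax(scores))  # most influential client index

# Toy example: 3 clients, each sending a 4-dimensional weight update.
rng = np.random.default_rng(0)
deltas = [rng.normal(size=4) for _ in range(3)]
x = rng.normal(size=4)

scores, top = attribute_prediction(x, deltas)
```

For deep networks the prediction does not decompose this cleanly, which is why TraceFL tracks provenance at the neuron level rather than relying on a linear decomposition.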

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher working with federated learning models and need to understand the source of a model's predictions for debugging, accountability, or quality control.

Not ideal if you are working with traditional centralized machine learning models, as its core value is in tracing contributions across distributed clients.

Tags: federated-learning, model-debugging, machine-learning-interpretability, client-accountability, distributed-ai
Status: Stale for 6 months · No package · No dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 10
Forks:
Language: Python
License: MIT
Last pushed: Nov 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/warisgill/TraceFL"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.