TsingZ0/FedKTL

CVPR 2024 accepted paper: "An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning"

Score: 33 / 100 (Emerging)

This project helps machine learning researchers improve the performance of many small, diverse AI models, like those on individual devices, by leveraging a powerful centralized image generator. It takes a pre-trained image generator (like Stable Diffusion) and various client datasets as input, and outputs small, personalized models that perform better on their specific tasks. This is ideal for AI researchers working with distributed model training and privacy-sensitive data.

No commits in the last 6 months.

Use this if you are a machine learning researcher focused on federated learning and need to enhance the capabilities of many client-side AI models with limited data, while efficiently utilizing a powerful pre-trained generative model.

Not ideal if you are looking for a plug-and-play solution for general image generation or if your primary concern is not federated learning with heterogeneous models.

federated-learning distributed-AI computer-vision-research generative-AI model-optimization
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 9 / 25

How are scores calculated?
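The scoring formula isn't documented on this page, but the four category scores shown above (each out of 25) appear to sum to the overall score. A minimal sketch of that arithmetic, assuming simple summation:

```python
# Category scores from this report (each out of a maximum of 25).
scores = {
    "Maintenance": 0,
    "Adoption": 8,
    "Maturity": 16,
    "Community": 9,
}

# Assumption: the overall score is the plain sum of the four categories.
overall = sum(scores.values())
print(overall)  # → 33, matching the 33 / 100 shown above
```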

Stars: 66
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/TsingZ0/FedKTL"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
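The same endpoint can be queried programmatically. A minimal Python sketch, assuming the endpoint returns JSON (the response schema isn't shown on this page, and the `ecosystem` path segment `diffusion` is taken from the curl command above):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch a quality report; assumes the endpoint returns a JSON object."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


# Usage (makes a network request):
# report = fetch_quality("diffusion", "TsingZ0", "FedKTL")
```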