Pengxin-Guo/FedSA-LoRA

Selective Aggregation for Low-Rank Adaptation in Federated Learning [ICLR 2025]

Score: 34 / 100 (Emerging)

This project helps machine learning engineers efficiently fine-tune large language models (LLMs) across distributed datasets, such as data held by different organizations or devices, while maintaining data privacy. It takes your LLM and the clients' local datasets as input and outputs a more accurate, globally adapted LLM. It's designed for machine learning researchers and practitioners working with federated learning setups.

No commits in the last 6 months.

Use this if you need to improve a large language model's performance by training on decentralized data without moving raw data to a central location.

Not ideal if you are fine-tuning an LLM on a single, centralized dataset or if you are not working with federated learning architectures.
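The core idea behind the repository, as the title suggests, is selective aggregation of LoRA adapters: only part of each client's low-rank update is sent to the server and averaged, rather than the full adapter. Below is a minimal sketch of that idea, assuming each client trains LoRA factors A and B and that only the A matrices are aggregated while B stays client-local; the function names and shapes are illustrative, not the repo's actual API.

```python
# Illustrative sketch of selective LoRA aggregation (not the repo's actual API).
# Assumption: each client holds LoRA factors A (rank x d_in) and B (d_out x rank);
# only the A factors are sent to the server and averaged, B stays on the client.
import torch


def client_lora_update(d_in: int, d_out: int, rank: int) -> dict:
    """Stand-in for one round of local LoRA fine-tuning on a client."""
    return {
        "A": torch.randn(rank, d_in) * 0.01,  # shared with the server
        "B": torch.zeros(d_out, rank),        # kept private on the client
    }


def aggregate_A(client_updates: list) -> torch.Tensor:
    """FedAvg-style averaging applied only to the A matrices."""
    return torch.stack([u["A"] for u in client_updates]).mean(dim=0)


# Simulate one communication round with 3 clients.
clients = [client_lora_update(d_in=768, d_out=768, rank=8) for _ in range(3)]
global_A = aggregate_A(clients)

# Each client rebuilds its adapter from the shared A and its private B.
for u in clients:
    u["A"] = global_A.clone()
    delta_W = u["B"] @ u["A"]  # LoRA weight update: delta_W = B @ A
    print(delta_W.shape)       # torch.Size([768, 768])
```

In this sketch, only the A matrices cross the network, which cuts communication roughly in half compared with sending both factors and keeps the client-specific B factors private.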

federated-learning large-language-models distributed-ai-training ai-privacy model-adaptation
No License | Stale (6 months) | No Package | No Dependents
Maintenance: 2 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 16 / 25


Stars: 60
Forks: 11
Language: Python
License: None
Category: llm-fine-tuning
Last pushed: Apr 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Pengxin-Guo/FedSA-LoRA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
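For programmatic access from Python, here is a minimal sketch using the requests library against the endpoint shown above; it assumes only that the endpoint returns JSON, since the response schema is not documented here.

```python
# Minimal sketch: fetch this project's quality data from the public API.
# Assumption: the endpoint returns a JSON payload; its exact fields are not documented here.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/Pengxin-Guo/FedSA-LoRA"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data)  # inspect the returned fields; authenticate with a free key if you need the higher rate limit
```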