Pengxin-Guo/FedSA-LoRA
Selective Aggregation for Low-Rank Adaptation in Federated Learning [ICLR 2025]
This project helps machine learning engineers efficiently fine-tune large language models (LLMs) across distributed datasets, such as those held by different organizations or devices, while preserving data privacy. It takes your LLM and diverse local datasets as input and produces a more accurate, globally adapted LLM. It's designed for researchers and practitioners working with federated learning setups.
No commits in the last 6 months.
Use this if you need to improve a large language model's performance by training on decentralized data without moving raw data to a central location.
Not ideal if you are fine-tuning an LLM on a single, centralized dataset or if you are not working with federated learning architectures.
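The core idea behind the repository's title is selective aggregation: in each federated round, only part of each client's LoRA update is averaged on the server, while the rest stays local to capture client-specific knowledge. Below is a minimal, illustrative sketch of that pattern, assuming the LoRA factors are plain NumPy arrays and that only the A matrices are shared; the class and function names here are hypothetical and are not the repository's actual API.

import numpy as np

class Client:
    def __init__(self, rank: int = 8, in_dim: int = 768, out_dim: int = 768):
        # LoRA decomposition: delta_W = B @ A, with A (rank x in_dim) and B (out_dim x rank)
        self.A = np.random.randn(rank, in_dim) * 0.01
        self.B = np.zeros((out_dim, rank))

    def local_update(self):
        # Placeholder for local fine-tuning on the client's private data.
        self.A += np.random.randn(*self.A.shape) * 0.001
        self.B += np.random.randn(*self.B.shape) * 0.001

def federated_round(clients):
    for c in clients:
        c.local_update()
    # Selective aggregation: average only the A matrices across clients.
    global_A = np.mean([c.A for c in clients], axis=0)
    for c in clients:
        c.A = global_A.copy()  # B matrices remain client-specific

clients = [Client() for _ in range(4)]
for _ in range(3):
    federated_round(clients)

Keeping one factor local while globally averaging the other is what distinguishes this family of methods from vanilla FedAvg applied to all LoRA parameters.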
Stars: 60
Forks: 11
Language: Python
License: —
Category:
Last pushed: Apr 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Pengxin-Guo/FedSA-LoRA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
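If you prefer to query the endpoint from a script rather than curl, a short Python sketch is below. It assumes the endpoint returns JSON; the exact response fields are not documented here.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Pengxin-Guo/FedSA-LoRA"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())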
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.