StupidTrees/SplitLLM
Split Learning Simulation Framework for LLMs
This framework lets machine learning researchers and privacy engineers simulate how large language models behave when their training is split across different systems, such as a client device and a server. Given an LLM architecture and a dataset, it reports the model's task performance and, crucially, its vulnerability to privacy attacks. Use it if you are working on securing LLMs in distributed or federated learning environments.
No commits in the last 6 months.
Use this if you need to evaluate the security and privacy risks of fine-tuning large language models using split learning architectures.
Not ideal if you're looking for a production-ready solution for deploying split learning, as this is a simulation and research framework.
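In split learning, the model is partitioned at a "cut layer": the client runs the early layers on its private data and sends only the intermediate ("smashed") activations to the server, which runs the remaining layers. A minimal sketch of that data flow (illustrative only; class and function names are not this framework's API):

```python
import random

random.seed(0)

def linear(x, w):
    # y = W x, with W as a list of rows
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def relu(x):
    return [max(0.0, v) for v in x]

class Client:
    """Holds the input-side layers; raw data never leaves the client."""
    def __init__(self, in_dim, hid_dim):
        self.w = [[random.uniform(-0.1, 0.1) for _ in range(in_dim)]
                  for _ in range(hid_dim)]

    def forward(self, x):
        # Only this "smashed" activation crosses the client/server boundary.
        return relu(linear(x, self.w))

class Server:
    """Holds the remaining layers and produces the output."""
    def __init__(self, hid_dim, out_dim):
        self.w = [[random.uniform(-0.1, 0.1) for _ in range(hid_dim)]
                  for _ in range(out_dim)]

    def forward(self, smashed):
        return linear(smashed, self.w)

client, server = Client(4, 8), Server(8, 2)
x = [1.0, 0.5, -0.3, 0.2]        # private client input
smashed = client.forward(x)       # transmitted to the server
logits = server.forward(smashed)  # server-side computation
print(len(smashed), len(logits))  # 8 2
```

The privacy question the framework studies is exactly what an attacker can recover about `x` from `smashed`: the activations leak information about the input even though the raw data stays on the client.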
Stars: 38
Forks: 6
Language: Python
License: Apache-2.0
Category:
Last pushed: Sep 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/StupidTrees/SplitLLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
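The endpoint path follows the pattern `/api/v1/quality/transformers/{owner}/{repo}` shown in the curl command. A small Python helper that builds the same URL for any repository (assuming the pattern generalizes; the response schema is not documented here, so this only constructs the request URL):

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub-style owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("StupidTrees", "SplitLLM")
print(url)
```

Pass the result to `urllib.request.urlopen` or `requests.get` to fetch the JSON; within the free tier no key is needed.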
Higher-rated alternatives
jncraton/languagemodels
Explore large language models in 512MB of RAM
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase