StupidTrees/SplitLLM

Split Learning Simulation Framework for LLMs

Quality score: 37 / 100 (Emerging)

This framework helps machine learning researchers and privacy engineers simulate how large language models behave when their training is split across different systems, like a client device and a server. It takes various LLM architectures and datasets as input, and outputs insights into the model's performance and, crucially, its vulnerability to privacy attacks. You would use this if you're working on securing LLMs in distributed or federated learning environments.

No commits in the last 6 months.

Use this if you need to evaluate the security and privacy risks of fine-tuning large language models using split learning architectures.

Not ideal if you're looking for a production-ready solution for deploying split learning, as this is a simulation and research framework.

Tags: LLM security, privacy research, federated learning, data privacy, distributed AI

Badges: Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 38
Forks: 6
Language: Python
License: Apache-2.0
Last pushed: Sep 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/StupidTrees/SplitLLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
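The endpoint above follows a simple path pattern (base URL, then ecosystem, owner, and repo name). As a minimal sketch, the URL can be built programmatically for any repository; the helper name and parameters below are illustrative, with only the base URL and path shape taken from the curl example.

```python
from urllib.parse import quote

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a repository.

    Hypothetical helper: percent-encodes each path segment so
    unusual repo names stay valid in the URL.
    """
    return f"{BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

print(quality_url("transformers", "StupidTrees", "SplitLLM"))
# https://pt-edge.onrender.com/api/v1/quality/transformers/StupidTrees/SplitLLM
```

The URL can then be fetched with any HTTP client; an API key (for the 1,000/day tier) would presumably be passed per the service's documentation.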