Prismadic/magnet

the small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly

Quality score: 34 / 100 (Emerging)

This toolkit helps machine learning engineers and researchers quickly adapt cutting-edge large language models (LLMs) to specific tasks and data, even with limited hardware. You provide your specialized text data and the desired LLM, and it outputs a fine-tuned, more accurate language model tailored to your needs. This is perfect for teams needing to deploy custom LLMs without massive cloud infrastructure.

No commits in the last 6 months.

Use this if you need to fine-tune state-of-the-art large language models efficiently across various hardware setups, from personal computers to distributed clusters, for specialized applications.

Not ideal if you are looking for a pre-trained, off-the-shelf language model that requires no customization or specific domain adaptation.

Tags: LLM fine-tuning, distributed AI, natural language processing, ML model deployment, AI research
Status: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 32
Forks: 4
Language: Python
License: MIT
Last pushed: Oct 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/Prismadic/magnet"

Open to everyone: 100 requests/day with no API key needed. A free key raises the limit to 1,000 requests/day.
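For programmatic use, the curl call above can be wrapped in a small client. Below is a minimal Python sketch using only the standard library; it assumes the endpoint returns JSON, since the response schema is not documented here, and the helper names (`quality_url`, `fetch_quality`) are illustrative, not part of the API.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch quality data for a repository.

    Anonymous access is limited to 100 requests/day; a free API key
    raises this to 1,000/day. The JSON field names are assumed, not
    documented here.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example above.
    print(fetch_quality("Prismadic", "magnet"))
```

Keeping URL construction separate from the network call makes the client easy to test without hitting the daily request quota.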