PhoebusSi/Alpaca-CoT

We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible. We have built a fine-tuning platform that makes it easy for researchers to get started with and use large models, and we welcome open-source enthusiasts to submit any meaningful PRs!

Quality score: 45 / 100 (Emerging)

This platform helps researchers quickly customize large language models for specific tasks. You provide an instruction-tuning dataset (such as Alpaca-CoT) and a base large language model, then select a parameter-efficient training method (such as LoRA or P-Tuning). The output is a fine-tuned model tailored to your specialized instructions. It is designed for AI researchers and data scientists who want to experiment with or deploy custom LLMs; a minimal sketch of the workflow follows.
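As a rough illustration of that workflow, here is a minimal sketch of LoRA-style parameter-efficient fine-tuning with Hugging Face transformers and peft. The base model name, target modules, and hyperparameters are illustrative assumptions, not values taken from this repo.

    # Minimal sketch, assuming Hugging Face transformers + peft; the model
    # name and all hyperparameters are illustrative, not from this repo.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, TaskType, get_peft_model

    base_model = "huggyllama/llama-7b"  # assumption: any causal LM would do
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

    # Freeze the base weights and attach trainable low-rank (LoRA) adapters.
    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,                                  # rank of the low-rank update
        lora_alpha=16,                        # scaling factor for adapter output
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of all weights
    # From here, train with any standard loop or transformers.Trainer on your
    # instruction-tuning dataset.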

2,801 stars. No commits in the last 6 months.

Use this if you need to adapt an existing large language model to perform better on a very specific set of instructions or data, without having to rebuild the model from scratch.

Not ideal if you're looking for an off-the-shelf solution for general language tasks or if you don't have access to instruction-tuning datasets.

Tags: AI-research · natural-language-processing · large-language-models · model-customization · instruction-tuning
Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25

Stars: 2,801
Forks: 251
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Dec 12, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PhoebusSi/Alpaca-CoT"

Open to everyone: 100 requests/day with no API key needed. Get a free key for 1,000 requests/day.
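For programmatic use, here is a small Python equivalent of the curl call above. This is a sketch assuming the standard requests library and a JSON response; the exact response schema is not documented here, so the result is printed as-is.

    # Sketch of calling the same endpoint from Python; assumes a JSON
    # response whose field names are not documented on this page.
    import requests

    url = "https://pt-edge.onrender.com/api/v1/quality/transformers/PhoebusSi/Alpaca-CoT"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # surface HTTP errors (e.g., rate limiting)
    data = resp.json()
    print(data)              # inspect the returned quality metrics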