WENGSYX/LMTuner
LMTuner: Make the LLM Better for Everyone
This tool helps non-programmers easily customize large language models (LLMs) to perform specific tasks. You input your existing text data or conversations, and it trains an LLM to generate responses or content tailored to your needs, even on consumer-grade hardware. It's designed for anyone who wants to adapt an LLM for their unique use case without needing coding expertise.
No commits in the last 6 months. Available on PyPI.
Use this if you want to fine-tune an existing large language model with your own domain-specific text data, without writing any code, and potentially on less powerful GPUs.
Not ideal if you need to build a large language model from scratch or require highly specialized, low-level control over the training process.
Stars: 38
Forks: 2
Language: Python
License: Apache-2.0
Category:
Last pushed: Sep 21, 2023
Commits (30d): 0
Dependencies: 7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/WENGSYX/LMTuner"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
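The endpoint takes the GitHub owner and repository name as path segments. A minimal Python sketch for building and fetching that URL follows; the helper name is made up here, and the assumption that the endpoint returns JSON is not confirmed by any schema documentation:

```python
from urllib.parse import quote

# Base path as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub owner/repo pair.

    Hypothetical helper: the API itself only documents the URL shape.
    """
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

# Fetching requires network access; the JSON field names are undocumented,
# so inspect `data` interactively before relying on specific keys:
# import json, urllib.request
# with urllib.request.urlopen(quality_url("WENGSYX", "LMTuner")) as resp:
#     data = json.load(resp)

print(quality_url("WENGSYX", "LMTuner"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/WENGSYX/LMTuner
```

With an API key, you would presumably pass it as a header or query parameter, but the key mechanism is not specified on this page.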
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code.
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning.
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.