gruai/koifish
A C++ framework for efficient training and fine-tuning of LLMs
This project helps machine learning engineers and researchers train and fine-tune large language models (LLMs) using significantly less computational power and time than conventional training pipelines require. It takes raw text data and a desired LLM architecture as input, and outputs a trained or fine-tuned LLM ready for deployment. The target users are AI practitioners who need to develop and adapt language models without access to massive GPU clusters.
Use this if you are a machine learning engineer or researcher looking to train or fine-tune LLMs, such as GPT-2 or QWen, on a single GPU with limited memory, and you want results quickly.
Not ideal if you are looking for a high-level Python library for general LLM usage or if you do not have experience with C++ development and GPU environments.
Stars
27
Forks
—
Language
C++
License
Apache-2.0
Category
Last pushed
Mar 01, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/gruai/koifish"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
limix-ldm-ai/LimiX
LimiX: Unleashing Structured-Data Modeling Capability for Generalist Intelligence...
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
google-research/plur
PLUR (Programming-Language Understanding and Repair) is a collection of source code datasets...
YalaLab/pillar-finetune
Finetuning framework for Pillar medical imaging models.
thuml/LogME
Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML...