gruai/koifish

A C++ framework for efficient training and fine-tuning of LLMs

Score: 33 / 100 (Emerging)

This project helps machine learning engineers and researchers efficiently train and fine-tune large language models (LLMs) using significantly less computational power and time. It takes raw text data and a desired LLM architecture as input, and outputs a trained or fine-tuned LLM ready for deployment. The target users are AI practitioners who need to develop and adapt language models without access to massive GPU clusters.

Use this if you are a machine learning engineer or researcher who wants to train or fine-tune LLMs such as GPT-2 or Qwen on a single GPU with limited memory and needs results quickly.

Not ideal if you want a high-level Python library for general LLM usage, or if you lack experience with C++ development and GPU environments.

large-language-models model-training deep-learning AI-research GPU-optimization
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 27
Forks:
Language: C++
License: Apache-2.0
Last pushed: Mar 01, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/gruai/koifish"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
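If you want to consume the endpoint from code rather than curl, the response can be parsed along these lines. Note this is a minimal sketch: the JSON field names (`project`, `score`, `tier`, `breakdown`) are assumptions inferred from the fields shown on this page, not a documented schema, so check a real response before relying on them.

```python
import json

# Assumed response shape, modeled on the fields displayed on this page.
# A real call would be:
#   urllib.request.urlopen("https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/gruai/koifish")
SAMPLE_RESPONSE = json.dumps({
    "project": "gruai/koifish",
    "score": 33,
    "tier": "Emerging",
    "breakdown": {"maintenance": 10, "adoption": 7, "maturity": 16, "community": 0},
})

def summarize(raw: str) -> str:
    """Format a one-line summary from a quality-API JSON payload."""
    data = json.loads(raw)
    parts = ", ".join(f"{k}: {v}/25" for k, v in data["breakdown"].items())
    return f'{data["project"]} scored {data["score"]}/100 ({data["tier"]}; {parts})'

print(summarize(SAMPLE_RESPONSE))
```

A summary line like this is handy for CI dashboards that track dependency health over time.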