MeryylleA/lunariscodex
A high-performance PyTorch toolkit for pre-training modern, Llama-style language models. Based on nanoGPT with significant architectural enhancements.
This toolkit helps machine-learning researchers and engineers efficiently train custom, high-performance large language models (LLMs) from scratch: you provide your own text datasets, and it outputs a ready-to-use Llama-style language model with state-of-the-art architectural features. It's designed for teams building specialized models for their own applications.
Use this if you need to pre-train a powerful, Llama-style language model on your specific dataset, demanding high performance and stability for large-scale training jobs.
Not ideal if you're looking to fine-tune an existing model, perform basic text analysis, or don't have extensive computational resources for training.
Stars: 13
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Jan 30, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/MeryylleA/lunariscodex"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
Higher-rated alternatives
uds-lsv/bert-stable-fine-tuning
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
VanekPetr/flan-t5-text-classifier
Fine-tuning of the Flan-T5 LLM for text classification 🤖 focuses on adapting a state-of-the-art...
kingTLE/literary-alpaca2
From vocabulary to fine-tuning: this is all you need
YuweiYin/HLT-MT
[IJCAI-ECAI 2022] HLT-MT: High-resource Language-specific Training for Multilingual Neural...
oatanas/containerized-transformer-finetuning
Containerized Transformer Fine-Tuning