Michaelgathara/GPT
Billion+ parameter model trained on FineWeb-EDU
This project helps machine learning engineers and researchers build and train custom large language models (LLMs) from the ground up: you supply raw text data, and it produces a fully trained GPT-style model that can generate coherent text from new prompts. It's designed for those who want to deeply understand and customize the LLM training process.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher who wants to build, train, and customize a GPT-style language model from scratch, using your own datasets and fine-tuning every aspect of the architecture and training process.
Not ideal if you simply want to use an existing pre-trained LLM or fine-tune an existing model for a specific task without needing to understand or implement the underlying architecture and training mechanics.
Stars: 14
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Jun 03, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Michaelgathara/GPT"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
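The same endpoint can be queried from Python. A minimal sketch using only the standard library is below; the URL is taken from the curl example above, but the JSON response shape is not documented here, so the fetch helper returns the raw parsed payload for the caller to inspect.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def tool_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_tool_data(owner: str, repo: str) -> dict:
    """GET the listing data for one repository.

    The response schema is undocumented on this page, so inspect the
    returned payload before relying on specific fields.
    """
    with urlopen(tool_url(owner, repo), timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_tool_data("Michaelgathara", "GPT")` requests the same URL as the curl command shown above.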
Higher-rated alternatives
Nixtla/nixtla
TimeGPT-1: production ready pre-trained Time Series Foundation Model for forecasting and...
andrewdalpino/NoPE-GPT
A GPT-style small language model (SLM) with no positional embeddings (NoPE).
sigdelsanjog/gptmed
pip install gptmed
akanyaani/gpt-2-tensorflow2.0
OpenAI GPT2 pre-training and sequence prediction implementation in Tensorflow 2.0
samkamau81/FinGPT_
FinGPT is an AI language model designed to understand and generate financial content. Built upon...