Laz4rz/GPT-2

Following Karpathy's GPT-2 implementation and training, with lots of comments because I have the memory of a goldfish.

Score: 26 / 100 · Experimental

This project offers a self-contained environment to understand and experiment with the core mechanics of GPT-2, a foundational large language model. It takes raw text data, processes it, and allows you to train a model that can generate new, coherent text based on the patterns it learns. AI researchers and machine learning engineers looking to grasp the underlying architecture and training processes of transformer models would find this valuable.
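At its core, training such a model reduces to next-token prediction: given a window of tokens, predict the token that follows. Below is a minimal sketch of that objective in PyTorch, using a bigram-level stand-in model in the spirit of Karpathy's walkthrough; the repo builds a full GPT-2 transformer, and every name here is illustrative rather than the repo's actual code.

```python
# Minimal sketch of the next-token prediction objective GPT-2 trains on.
# The model is a bigram-level stand-in (an embedding table of logits);
# the real repo uses a full transformer, so treat this as illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "hello world, hello gpt"               # toy corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])  # character ids

block_size = 8                                # context length
vocab_size = len(chars)

def get_batch(batch_size=4):
    # Sample random windows; targets are the inputs shifted by one token.
    ix = torch.randint(len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i:i + block_size] for i in ix])
    y = torch.stack([data[i + 1:i + 1 + block_size] for i in ix])
    return x, y

model = nn.Embedding(vocab_size, vocab_size)  # logits straight from a table
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

for step in range(200):
    x, y = get_batch()
    logits = model(x)                         # (batch, time, vocab)
    loss = F.cross_entropy(logits.view(-1, vocab_size), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```

The cross-entropy loss over shifted targets is the whole training signal; swapping the embedding table for a stack of causal self-attention blocks is what turns this toy into a GPT-2.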

171 stars. No commits in the last 6 months.

Use this if you are an AI researcher or machine learning engineer who wants a deep dive into the implementation details and training nuances of a GPT-2-like model built from scratch.

Not ideal if you're looking for a production-ready GPT-2 implementation, a high-performance training solution, or a tool for general text generation without needing to understand the internals.

Large Language Models · Transformer Architecture · Model Training · Natural Language Processing · Deep Learning · Research
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 8 / 25

How are scores calculated? The four category scores above sum to the overall score: 0 + 10 + 8 + 8 = 26 / 100.

Stars: 171
Forks: 6
Language: Jupyter Notebook
License: None
Last pushed: Jul 31, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Laz4rz/GPT-2"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
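For scripted access, the same endpoint can be called from Python. This is a minimal sketch assuming only what the curl example shows; the response's JSON field names are not documented on this page, so the snippet simply prints the payload for inspection.

```python
# Fetch the quality report for Laz4rz/GPT-2 from the same endpoint
# the curl example uses. The response schema is an assumption here,
# so we print the raw payload rather than picking out fields.
import requests  # third-party HTTP client (pip install requests)

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Laz4rz/GPT-2"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()   # surface HTTP errors, e.g. if rate-limited
report = resp.json()
print(report)             # inspect the payload before relying on any field
```

If you register for a free key, the page does not show how it is passed (header vs. query parameter), so check the API docs before wiring it into a script.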