labmlai/annotated_deep_learning_paper_implementations

๐Ÿง‘โ€๐Ÿซ 60+ Implementations/tutorials of deep learning papers with side-by-side notes ๐Ÿ“; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), ๐ŸŽฎ reinforcement learning (ppo, dqn), capsnet, distillation, ... ๐Ÿง 

Score: 56 / 100 (Established)

This project helps deep learning researchers and practitioners understand complex deep learning algorithms. It provides clear, side-by-side explanations alongside PyTorch code implementations for various neural networks, including Transformers, GANs, and Reinforcement Learning models. You get working code and detailed notes, making it easier to grasp how these advanced systems function.


Use this if you are a deep learning researcher or practitioner who needs to understand and implement advanced neural network architectures, like those found in recent academic papers, with clear explanations.

Not ideal if you are looking for a high-level overview or an out-of-the-box solution without diving into the underlying code and mathematical details.

deep-learning-research neural-networks machine-learning-engineering algorithm-understanding AI-model-development
Package: none
Dependents: none

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 65,913
Forks: 6,621
Language: Python
License: MIT
Last pushed: Jan 22, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/labmlai/annotated_deep_learning_paper_implementations"

The API is open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.
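The curl call above can also be reproduced from Python. This is a minimal sketch using only the standard library; the URL path segments (ecosystem, owner, repo) and the assumption that the endpoint returns JSON are taken from the example above, not from any published API documentation.

```python
import urllib.request
import json

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"


url = quality_url(
    "transformers", "labmlai", "annotated_deep_learning_paper_implementations"
)

# Uncomment to fetch live data (assumes a JSON response body):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

No API key is needed at the default rate limit, so the request carries no authentication headers.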