dobriban/Principles-of-AI-LLMs

Materials for the course Principles of AI: LLMs at UPenn (Stat 9911, Spring 2025). LLM architectures, training paradigms (pre- and post-training, alignment), test-time computation, reasoning, safety and robustness (jailbreaking, oversight, uncertainty), representations, interpretability (circuits), etc.

Score: 36 / 100 (Emerging)

This is a comprehensive collection of materials from the 'Principles of AI: LLMs' course at UPenn. It provides lecture notes, presentations, and readings covering the foundations of Large Language Models, from architectures to advanced topics such as training, reasoning, and safety. The target audience is graduate students, academics, and professionals looking to deepen their understanding of LLMs beyond basic usage.

No commits in the last 6 months.

Use this if you are a student, researcher, or AI practitioner seeking in-depth academic resources to understand the underlying principles and advanced concepts of Large Language Models.

Not ideal if you are looking for a hands-on coding tutorial or a high-level, non-technical introduction to using LLMs in practical applications.

Tags: AI Education, Large Language Models, Machine Learning Research, Neural Network Architectures, Natural Language Processing
Badges: Stale (6m), No Package, No Dependents
Maintenance 2 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 10 / 25


Stars: 44
Forks: 4
Language:
License: CC0-1.0
Last pushed: Jun 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/dobriban/Principles-of-AI-LLMs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
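For programmatic use, the curl command above can be wrapped in a small script. The sketch below, using only the Python standard library, builds the endpoint URL for an arbitrary owner/repo pair and fetches the report; the JSON field names returned by the API are not documented here, so the response is treated as an opaque dict.

```python
# Minimal sketch for querying the quality API (response schema assumed unknown).
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report as JSON; requires network access."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the same URL used in the curl example above.
    print(quality_url("dobriban", "Principles-of-AI-LLMs"))
```

Note the free tier's 100 requests/day limit: if you poll many repositories, add your API key and throttle requests accordingly.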