stair-lab/mlhp
Machine Learning from Human Preferences
This project helps authors, educators, and researchers publish comprehensive technical books, lecture slides, and course materials online or as PDFs. It takes structured content files (such as Quarto documents with R and Python code) and produces a complete, formatted website, a printable PDF, and separate slide decks. Its primary users are people creating educational or reference materials in technical fields, especially machine learning and data science.
Use this if you need to create and publish high-quality, reproducible technical documentation, educational books, or lecture slides that integrate R and Python code.
Not ideal if you're looking for a simple blog platform, a general-purpose website builder, or a tool that doesn't involve heavy technical content creation.
Stars: 30
Forks: 6
Language: TeX
License: —
Category:
Last pushed: Feb 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/stair-lab/mlhp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
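The same endpoint can be consumed programmatically. A minimal sketch in Python, assuming the endpoint returns a JSON body; the field names used below (`repo`, `stars`) are hypothetical, so inspect a live response for the actual schema:

```python
import json
import urllib.request

# URL from the curl example above; no API key is needed for the
# free tier (100 requests/day).
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/stair-lab/mlhp"

def fetch_quality(url: str = URL, timeout: float = 10.0) -> str:
    """Fetch the raw response body from the quality endpoint."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8")

def parse_quality(body: str) -> dict:
    """Decode the JSON payload into a dict.

    The keys in the example below are assumptions, not a documented
    schema -- print the parsed dict to see what the API really returns.
    """
    return json.loads(body)

# Offline illustration with a hypothetical payload:
sample = '{"repo": "stair-lab/mlhp", "stars": 30, "forks": 6}'
record = parse_quality(sample)
print(record["repo"], record["stars"])
```

To hit the live API, call `parse_quality(fetch_quality())`; the offline sample keeps the sketch runnable without network access.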
Related models
princeton-nlp/SimPO
[NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward
uclaml/SPPO
The official implementation of Self-Play Preference Optimization (SPPO)
general-preference/general-preference-model
[ICML 2025] Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment...
sail-sg/dice
Official implementation of Bootstrapping Language Models via DPO Implicit Rewards
line/sacpo
[NeurIPS 2024] SACPO (Stepwise Alignment for Constrained Policy Optimization)