stanfordmlgroup/ngboost
Natural Gradient Boosting for Probabilistic Prediction
This library helps data scientists and machine learning engineers build models that predict not just a single outcome but a full distribution over possible outcomes and their likelihoods. You provide structured data with features and a target variable, and the fitted model returns a probability distribution for each prediction rather than a point estimate. This is useful for anyone who needs to quantify the uncertainty in their predictions.
1,841 stars. Used by 2 other packages. Actively maintained with 1 commit in the last 30 days. Available on PyPI.
Use this if you need to quantify the uncertainty of your predictions and understand the full probability distribution of potential outcomes, not just a single most likely value.
Not ideal if you only need a single best-guess prediction and are not concerned with the underlying uncertainty or probability distribution.
Stars
1,841
Forks
245
Language
Jupyter Notebook
License
Apache-2.0
Last pushed
Feb 25, 2026
Commits (30d)
1
Dependencies
7
Reverse dependents
2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/stanfordmlgroup/ngboost"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
dmlc/xgboost
Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python,...
catboost/catboost
A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for...
lightgbm-org/LightGBM
A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework...
fabsig/GPBoost
Tree-Boosting, Gaussian Processes, and Mixed-Effects Models
serengil/chefboost
A Lightweight Decision Tree Framework supporting regular algorithms: ID3, C4.5, CART, CHAID and...