dmlc/xgboost

Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow

Score: 85 / 100 (Verified)
XGBoost helps data scientists and machine learning engineers quickly build highly accurate predictive models for classification, regression, and ranking tasks. It takes structured (tabular) data, such as spreadsheets or database tables, and trains a gradient-boosted tree ensemble for making predictions. It is aimed at practitioners who need robust, efficient predictive analytics.

28,121 stars. Used by 122 other packages. Actively maintained with 38 commits in the last 30 days. Available on PyPI.

Use this if you need to build fast, accurate predictive models from large, structured datasets, especially in contexts where performance and scalability are critical.

Not ideal if your primary task involves processing unstructured data like images, audio, or raw text, or if you need highly interpretable models rather than maximum predictive power.

predictive-modeling machine-learning-engineering data-science business-forecasting risk-assessment
Maintenance 20 / 25
Adoption 15 / 25
Maturity 25 / 25
Community 25 / 25


Stars: 28,121
Forks: 8,847
Language: C++
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 38
Dependencies: 3
Reverse dependents: 122

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dmlc/xgboost"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
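The same endpoint can be queried from Python with the standard library. Note that the JSON field names in the offline sample below are assumptions based on the stats shown on this page, not a documented schema:

```python
import json
from urllib.request import Request, urlopen

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dmlc/xgboost"

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record for a package (requires network access)."""
    req = Request(url, headers={"Accept": "application/json"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Offline illustration: these field names are hypothetical, mirroring the
# values displayed on this page rather than a published API schema.
sample = json.loads(
    '{"score": 85, "stars": 28121, "forks": 8847, "license": "Apache-2.0"}'
)
print(sample["score"], sample["stars"])
```

With an API key, you would typically pass it in a request header; check the service's documentation for the exact header name.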