shapash and AIX360

Both tools provide explainability and interpretability functionality for machine learning models, making them **competitors**: a user would typically choose one over the other based on specific needs. Shapash emphasizes user-friendliness and integrates popular explainability methods, while AIX360 offers a broader collection of diverse explainability and interpretability algorithms.

| | shapash | AIX360 |
| --- | --- | --- |
| Score | 70 (Verified) | 59 (Established) |
| Maintenance | 13/25 | 0/25 |
| Adoption | 11/25 | 10/25 |
| Maturity | 25/25 | 25/25 |
| Community | 21/25 | 24/25 |
| Stars | 3,150 | 1,767 |
| Forks | 373 | 328 |
| Downloads | | |
| Commits (30d) | 3 | 0 |
| Language | Jupyter Notebook | Python |
| License | Apache-2.0 | Apache-2.0 |
| Risk flags | No risk flags | Stale 6m |

About shapash

MAIF/shapash

🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models

This project helps data scientists and machine learning engineers understand why their predictive models make certain decisions. It takes a trained machine learning model and its input data, then generates easy-to-understand visualizations and reports that explain the model's behavior. The output helps both technical and non-technical stakeholders gain trust and insights into the model's predictions.

machine-learning-auditing model-explanation data-science-communication predictive-analytics AI-transparency

About AIX360

Trusted-AI/AIX360

Interpretability and explainability of data and machine learning models

This toolkit helps data scientists, machine learning engineers, and researchers understand why their AI models make specific predictions. It takes your existing tabular, text, image, or time-series data and machine learning models, and outputs explanations showing the factors influencing the model's decisions or highlighting important aspects of the data itself. This allows you to build trust in AI systems and debug potential issues.

machine-learning-explainability ai-trustworthiness model-debugging data-understanding responsible-ai

Scores updated daily from GitHub, PyPI, and npm data.