yfzhang114/Generalization-Causality
Reading notes on a wide range of research topics: domain generalization, domain adaptation, causality, robustness, prompting, optimization, and generative models
This collection of research notes helps machine learning practitioners and researchers build AI models that perform reliably even when real-world data differs from the data they were trained on. It compiles cutting-edge research and offers insights into techniques such as domain generalization, causal inference, and robustness. The primary audience is ML researchers, PhD students, and data scientists working on advanced AI applications where models must withstand unexpected data shifts.
1,238 stars. No commits in the last 6 months.
Use this if you are building or researching AI models and need them to maintain performance and fairness when deployed in dynamic, real-world environments with unpredictable data distributions.
Not ideal if you are looking for ready-to-use code libraries or tutorials for basic machine learning tasks.
Stars
1,238
Forks
103
Language
—
License
MIT
Category
—
Last pushed
Dec 14, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/yfzhang114/Generalization-Causality"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
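The curl command above can also be issued from Python. A minimal sketch follows; the URL layout (`/quality/<category>/<owner>/<repo>`) is taken from the example above, while the helper name `quality_url` and the commented-out live fetch are illustrative assumptions, not part of the documented API.

```python
import json
import urllib.request

# Base endpoint as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repository in a given category.

    `category` and `repo` mirror the path segments of the curl example;
    no validation is performed in this sketch.
    """
    return f"{BASE}/{category}/{repo}"

url = quality_url("ml-frameworks", "yfzhang114/Generalization-Causality")
print(url)
# To fetch live data (no key needed, 100 requests/day):
# data = json.load(urllib.request.urlopen(url))
```

The helper only constructs the URL; the actual request is left commented out so the sketch runs without network access.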
Higher-rated alternatives
facebookincubator/MCGrad
MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model...
dholzmueller/probmetrics
Post-hoc calibration methods and metrics for classification
gpleiss/temperature_scaling
A simple way to calibrate your neural network.
Affirm/splinator
Splinator: probabilistic calibration with regression splines
hollance/reliability-diagrams
Reliability diagrams visualize whether a classifier model needs calibration