JBris/model-calibration-evaluation
Evaluating model calibration methods for sensitivity analysis, uncertainty analysis, optimisation, and Bayesian inference
This project helps researchers and engineers assess and improve the accuracy of complex computer models. It applies a range of calibration methods to a model's outputs and evaluates how well the calibrated parameters reproduce real-world observations or experimental data. The results show which calibration techniques work best for your specific modeling needs, which is useful for anyone performing sensitivity analysis, uncertainty analysis, optimization, or Bayesian inference.
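To make the idea concrete, here is a minimal, hypothetical sketch of what model calibration means: fitting a toy model's parameters to noisy observations with SciPy's least-squares solver. The model, data, and method below are illustrative assumptions, not code from this repository.

import numpy as np
from scipy.optimize import least_squares

# Hypothetical toy model; the repository's actual models and methods differ.
def model(params, x):
    a, b = params
    return a * np.exp(-b * x)

# Synthetic "observations" with noise, standing in for experimental data.
rng = np.random.default_rng(0)
x_obs = np.linspace(0, 4, 20)
y_obs = model([2.5, 1.3], x_obs) + rng.normal(0, 0.05, x_obs.size)

# Calibrate: find parameters minimizing the residual between model and data.
result = least_squares(lambda p: model(p, x_obs) - y_obs, x0=[1.0, 1.0])
print("Calibrated parameters:", result.x)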
No commits in the last 6 months.
Use this if you need to determine the most effective way to calibrate your simulation models to ensure they accurately reflect real-world phenomena.
Not ideal if you are looking for a tool to build or run simulations, as this project focuses solely on evaluating calibration methods.
Stars: 15
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JBris/model-calibration-evaluation"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
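If you prefer to call the endpoint from Python, the following sketch uses only the standard library. It assumes the endpoint returns JSON; the response fields are not documented here, so the example simply pretty-prints whatever comes back.

import json
from urllib.request import urlopen

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JBris/model-calibration-evaluation"
with urlopen(url) as resp:
    data = json.load(resp)  # assumes a JSON response body
print(json.dumps(data, indent=2))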
Higher-rated alternatives
EmuKit/emukit
A Python-based toolbox of various methods in decision making, uncertainty quantification and...
google/uncertainty-baselines
High-quality implementations of standard and SOTA methods on a variety of tasks.
nielstron/quantulum3
Library for unit extraction - fork of quantulum for python3
IBM/UQ360
Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you...
aamini/evidential-deep-learning
Learn fast, scalable, and calibrated measures of uncertainty using neural networks!