cambridge-mlg/acp

Implementation for the paper "Approximating full conformal prediction at scale via influence functions"

28 / 100 · Experimental

This tool helps data scientists and machine learning engineers obtain reliable uncertainty estimates from their models, especially on large datasets. Given any differentiable machine learning model and a dataset, it outputs a 'prediction set' for each input that contains the true label with a user-specified probability (e.g. 90%). This guarantee holds on average over inputs, so you can quantify the range of plausible outcomes rather than relying on a single point prediction.
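For context, the coverage guarantee described above can be illustrated with plain split conformal prediction, a simpler relative of the full conformal method this repo approximates. Everything below is a self-contained toy sketch (the model, data, and all names are made up for illustration), not the repo's own code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class "model": softmax over a fixed linear map. This stands in
# for any differentiable model; the names here are illustrative only.
def predict_proba(X, W):
    logits = X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n, d, k = 500, 5, 3
W = rng.normal(size=(d, k))
X = rng.normal(size=(n, d))
y = predict_proba(X, W).argmax(axis=1)  # labels consistent with the model

# Split the data: 400 calibration points, 100 test points.
X_cal, y_cal = X[:400], y[:400]
X_test, y_test = X[400:], y[400:]

# Nonconformity score: 1 - predicted probability of the true class.
p_cal = predict_proba(X_cal, W)
cal_scores = 1.0 - p_cal[np.arange(len(y_cal)), y_cal]

# Finite-sample-corrected quantile targeting 90% marginal coverage.
alpha = 0.1
level = np.ceil((len(cal_scores) + 1) * (1 - alpha)) / len(cal_scores)
qhat = np.quantile(cal_scores, level)

# Prediction set: every label whose nonconformity score is below qhat.
p_test = predict_proba(X_test, W)
pred_sets = [np.flatnonzero(1.0 - p <= qhat) for p in p_test]

coverage = np.mean([y_test[i] in pred_sets[i] for i in range(len(y_test))])
print(f"empirical coverage: {coverage:.2f}")  # should be roughly 0.9
```

Full conformal prediction, which this repo approximates via influence functions, instead refits the model for every candidate label of every test point; the influence-function approximation is what makes that tractable on large datasets.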

No commits in the last 6 months.

Use this if you need your model to output a set of predictions guaranteed to include the true label with a high, specified probability, especially when working with large datasets.

Not ideal if you only need a single best prediction from your model and do not require quantifiable confidence guarantees or prediction sets.

predictive-modeling model-evaluation risk-assessment machine-learning-operations data-science
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 7 / 25


Stars: 11
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 25, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cambridge-mlg/acp"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.