thetawom/mabby

A multi-armed bandit (MAB) simulation library in Python

Score: 29 / 100 (Experimental)

This library helps data scientists and machine learning engineers test strategies for sequential decision problems where the best option isn't known in advance. You supply decision-making algorithms and "bandit arms" with different reward probabilities, and it outputs performance metrics such as cumulative regret and how often the optimal arm was chosen, so you can compare and refine strategies before applying them in real-world scenarios.

No commits in the last 6 months.

Use this if you need to simulate and evaluate multi-armed bandit algorithms to find the most effective strategy for problems like A/B testing, personalized recommendations, or dynamic pricing.

Not ideal if you are looking for a plug-and-play solution to directly deploy bandit algorithms in production without needing to simulate their performance first.
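To make the kind of simulation described above concrete, here is a minimal, self-contained sketch of an epsilon-greedy agent run against Bernoulli arms, tracking cumulative (pseudo-)regret and the optimal-choice rate. This is a concept illustration only; the function and parameter names are hypothetical and do not reflect mabby's actual API.

```python
# Concept sketch (hypothetical names, not mabby's API): epsilon-greedy
# agent against Bernoulli bandit arms, tracking the metrics above.
import random

def simulate(probs, epsilon=0.1, steps=1000, seed=0):
    """Run one epsilon-greedy simulation; return (regret, optimal-choice rate)."""
    rng = random.Random(seed)
    k = len(probs)
    counts = [0] * k        # pulls per arm
    values = [0.0] * k      # running mean reward per arm
    best = max(probs)       # expected reward of the optimal arm
    regret, optimal_pulls = 0.0, 0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                        # explore
        else:
            arm = max(range(k), key=lambda i: values[i])  # exploit
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        regret += best - probs[arm]          # expected per-step regret
        optimal_pulls += probs[arm] == best  # did we pull the best arm?
    return regret, optimal_pulls / steps

regret, opt_rate = simulate([0.2, 0.5, 0.8])
```

Comparing `regret` and `opt_rate` across strategies (e.g. different epsilons, or UCB vs. epsilon-greedy) is exactly the kind of before-deployment evaluation this library is aimed at.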

Tags: A/B testing, experimental design, reinforcement learning, simulation, decision-making, analytics
Badges: Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 8 / 25


Stars: 9
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Jul 15, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/thetawom/mabby"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.