thetawom/mabby
A multi-armed bandit (MAB) simulation library in Python
This library helps data scientists and machine learning engineers test strategies for sequential decision-making problems where the best option isn't known in advance. You supply decision-making algorithms and 'bandit arms' with different reward probabilities, and it reports performance metrics such as cumulative regret and how often the optimal arm was chosen, letting you compare and refine your strategies before applying them in real-world scenarios.
No commits in the last 6 months.
Use this if you need to simulate and evaluate multi-armed bandit algorithms to find the most effective strategy for problems like A/B testing, personalized recommendations, or dynamic pricing.
Not ideal if you are looking for a plug-and-play solution to directly deploy bandit algorithms in production without needing to simulate their performance first.
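To illustrate the kind of experiment this library is built for, here is a minimal, self-contained sketch of an epsilon-greedy agent on Bernoulli arms that tracks cumulative (expected) regret and the optimal-arm selection rate. This uses only the standard library and does not reflect mabby's actual API; the function name and parameters are illustrative.

```python
import random

def simulate_eps_greedy(arm_probs, epsilon=0.1, steps=1000, seed=0):
    """Run epsilon-greedy on Bernoulli arms; return cumulative expected
    regret and the fraction of pulls that chose the optimal arm.
    (Generic sketch, not mabby's API.)"""
    rng = random.Random(seed)
    n = len(arm_probs)
    counts = [0] * n
    values = [0.0] * n              # running mean reward per arm
    best = max(arm_probs)
    best_arm = arm_probs.index(best)
    regret = 0.0
    optimal_pulls = 0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)              # explore uniformly
        else:
            arm = values.index(max(values))     # exploit current estimate
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        regret += best - arm_probs[arm]         # expected regret of this pull
        optimal_pulls += (arm == best_arm)
    return regret, optimal_pulls / steps

regret, opt_rate = simulate_eps_greedy([0.3, 0.5, 0.7])
```

A simulation library in this space typically wraps loops like this one, runs them over many trials, and aggregates the regret and optimal-choice metrics for each strategy being compared.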
Stars: 9
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Jul 15, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/thetawom/mabby"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
WilliamLwj/PyXAB
PyXAB - A Python Library for X-Armed Bandit and Online Blackbox Optimization Algorithms
jekyllstein/Reinforcement-Learning-Sutton-Barto-Exercise-Solutions
Chapter notes and exercise solutions for Reinforcement Learning: An Introduction by Sutton and Barto
cfoh/Multi-Armed-Bandit-Example
Learning Multi-Armed Bandits by Examples. Currently covering MAB, UCB, Boltzmann Exploration,...
matteocasolari/reinforcement-learning-an-introduction-solutions
Implementations for solutions to programming exercises of Reinforcement Learning: An...
BY571/Upside-Down-Reinforcement-Learning
Upside-Down Reinforcement Learning (⅂ꓤ) implementation in PyTorch. Based on the paper published...