cfoh/Multi-Armed-Bandit-Example

Learning Multi-Armed Bandits by examples. Currently covers MAB, UCB, Boltzmann Exploration, Thompson Sampling, Contextual MAB, LinUCB, and Deep MAB.
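As a flavour of the algorithms listed above, here is a minimal UCB1 sketch on simulated Bernoulli arms. This is an illustrative stand-alone example, not code from the repository; the function name, arm means, and round count are all assumptions.

```python
import math
import random

def ucb1(true_means, rounds, seed=0):
    """Run UCB1 on simulated Bernoulli arms; return per-arm pull counts."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n      # times each arm was pulled
    sums = [0.0] * n      # total reward collected per arm
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1   # pull each arm once to initialise its estimate
        else:
            # UCB1 index: empirical mean + sqrt(2 ln t / n_a) exploration bonus
            arm = max(range(n),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.2, 0.5, 0.8], rounds=2000)
# After enough rounds, the arm with the highest true mean dominates the pulls.
```

Over 2000 rounds the bonus term shrinks for well-sampled arms, so pulls concentrate on the best arm while the others are still revisited occasionally.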

Score: 44 / 100 (Emerging)

This project helps marketers and product managers optimize decisions in real time by identifying the best-performing option among several choices. For example, you feed in different ad creatives, and it tells you which one customers click on most, so you can quickly learn and adapt your strategy to maximize positive outcomes like ad clicks or product purchases.

Use this if you need to continuously learn which of several options performs best (e.g., ad variations, product recommendations, or website layouts) and adjust your strategy on the fly.

Not ideal if your decisions don't have immediate, measurable feedback, or if you need a static, one-time recommendation rather than continuous learning.
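The ad-creative use case above maps naturally onto Thompson Sampling, one of the methods the repository covers. The sketch below is a hypothetical illustration under assumed click-through rates, not the repository's own code: each ad keeps a Beta posterior over its CTR, and on every impression we show the ad whose posterior sample is highest.

```python
import random

def thompson_pick(successes, failures, rng):
    """Sample each ad's Beta(clicks+1, misses+1) posterior; pick the argmax."""
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])

# Assumed click-through rates for three hypothetical ad creatives.
ctr = [0.02, 0.05, 0.11]
rng = random.Random(1)
succ, fail = [0, 0, 0], [0, 0, 0]
for _ in range(5000):
    ad = thompson_pick(succ, fail, rng)   # choose which creative to show
    if rng.random() < ctr[ad]:            # simulate the customer's click
        succ[ad] += 1
    else:
        fail[ad] += 1

shown = [s + f for s, f in zip(succ, fail)]
# Impressions concentrate on the creative with the highest true CTR.
```

Because sampling from the posterior balances exploration and exploitation automatically, no separate exploration schedule is needed; weak creatives still get occasional impressions until the posteriors rule them out.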

digital-marketing ad-optimization A/B-testing personalization product-recommendations
No Package No Dependents
Maintenance 6 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 45
Forks: 7
Language: Python
License: MIT
Last pushed: Nov 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cfoh/Multi-Armed-Bandit-Example"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.