singhsidhukuldeep/contextual-bandits

A comprehensive Python library for reinforcement learning applications, implementing a variety of contextual and non-contextual multi-armed bandit algorithms: LinUCB, Epsilon-Greedy, Upper Confidence Bound (UCB), Thompson Sampling, KernelUCB, NeuralLinearBandit, and DecisionTreeBandit.

Overall score: 27 / 100 (Experimental)

This project helps anyone making sequential decisions where they need to choose the best option from a set of choices, especially when those choices have different outcomes based on various factors. It takes in data about different options and their performance in various situations, then provides a strategy for which option to choose next to maximize overall success. This is ideal for marketers optimizing ad campaigns, researchers selecting experiment conditions, or platform managers personalizing user experiences.
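The "choose, observe the outcome, update your strategy" loop described above is the core of every bandit algorithm. As an illustrative sketch of the simplest variant the library lists, here is a minimal epsilon-greedy bandit (this is not the library's API, just the underlying idea): it exploits the best-known arm most of the time and explores a random arm with probability epsilon.

```python
import random

class EpsilonGreedy:
    """Minimal epsilon-greedy bandit sketch (illustrative, not this library's API)."""

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.rng = random.Random(seed)

    def select_arm(self):
        # Explore with probability epsilon, otherwise exploit the best mean.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=self.values.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental update of the running mean reward.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy simulation: arm 1 pays off more often, so the bandit should learn to favor it.
bandit = EpsilonGreedy(n_arms=2, epsilon=0.1, seed=42)
true_probs = [0.3, 0.7]
for _ in range(2000):
    arm = bandit.select_arm()
    reward = 1.0 if bandit.rng.random() < true_probs[arm] else 0.0
    bandit.update(arm, reward)
```

The same choose/observe/update interface generalizes to the contextual algorithms the library ships, where the selection step also takes a feature vector describing the current situation.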

No commits in the last 6 months.

Use this if you need to continually make the best choice from several options, adapting your strategy as you gather more information and learn from past outcomes.

Not ideal if your decision-making problem doesn't involve uncertainty, sequential choices, or the need to balance exploring new options with exploiting known good ones.
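Of the algorithms the library names, LinUCB is the canonical contextual one. A minimal sketch of the standard disjoint LinUCB algorithm, assuming per-arm linear rewards (again illustrative, not this repository's API): each arm keeps a ridge-regression estimate of its reward weights and adds an upper-confidence bonus for under-explored directions.

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB sketch: one ridge-regression model per arm (not this library's API)."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # X^T X + I, per arm
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # X^T y, per arm

    def select_arm(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                             # ridge estimate of arm weights
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)   # exploration bonus
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy simulation: arm 0 pays off for the first context feature, arm 1 for the second.
rng = np.random.default_rng(0)
true_theta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
agent = LinUCB(n_arms=2, dim=2, alpha=1.0)
for _ in range(500):
    x = rng.random(2)
    arm = agent.select_arm(x)
    reward = true_theta[arm] @ x + rng.normal(scale=0.05)
    agent.update(arm, x, reward)
```

After enough rounds the agent routes each context to the arm whose learned weights best match it, which is exactly the "adapt your strategy as you gather more information" behavior described above.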

A/B testing · personalized recommendations · ad campaign optimization · dynamic pricing · clinical trials
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 6 / 25


Stars: 13

Forks: 1

Language: Python

License: GPL-3.0

Last pushed: Dec 31, 2024

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/singhsidhukuldeep/contextual-bandits"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.