rbosh/ml-adp

Approximate dynamic programming for stochastic optimal control in PyTorch

Quality score: 29 / 100 (Experimental)

This tool helps quantitative analysts and researchers build and solve complex optimization problems where decisions are made sequentially over time, and outcomes are uncertain. You define the rules for how a system changes, the costs incurred, and potential random effects, and it helps you find the best sequence of actions to minimize total cost. The output is a set of optimal control policies, often represented by neural networks, that guide decision-making at each step.
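The workflow described above (define the system dynamics, the stage costs, and the random shocks, then train neural-network policies that minimize expected total cost) can be sketched in plain PyTorch. This is a generic illustration of the approach, not ml-adp's actual API; all names (`dynamics`, `cost`, `policies`) are hypothetical:

```python
# Generic sketch (hypothetical names, NOT ml-adp's API): learn one small
# neural-network control policy per time step by minimizing simulated
# cumulative cost over random rollouts with gradient descent.
import torch
import torch.nn as nn

torch.manual_seed(0)
STEPS, BATCH = 3, 256

# One policy network per step: maps state -> control.
policies = nn.ModuleList(
    nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    for _ in range(STEPS)
)

def dynamics(x, u, noise):
    # Rule for how the system changes: state, control, random shock.
    return x + u + 0.1 * noise

def cost(x, u):
    # Stage cost: distance from the origin plus control effort.
    return (x ** 2 + 0.1 * u ** 2).mean()

opt = torch.optim.Adam(policies.parameters(), lr=1e-2)
for epoch in range(200):
    x = torch.randn(BATCH, 1)          # random initial states
    total = torch.zeros(())
    for t in range(STEPS):
        u = policies[t](x)
        total = total + cost(x, u)
        x = dynamics(x, u, torch.randn(BATCH, 1))
    total = total + (x ** 2).mean()    # terminal cost
    opt.zero_grad()
    total.backward()
    opt.step()

print(f"final training cost: {total.item():.3f}")
```

The key design point, which ml-adp shares with this sketch, is that the expected cost is estimated by Monte Carlo simulation and the policies are trained end-to-end by backpropagating through the simulated dynamics.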

No commits in the last 6 months.

Use this if you need to solve stochastic optimal control problems in a PyTorch environment, especially when using neural networks for control optimization or approximating value functions.

Not ideal if you are looking for a pre-built, ready-to-use solution for a specific control problem without needing to define custom neural network architectures or interact with PyTorch.

quantitative-finance operations-research stochastic-control reinforcement-learning dynamic-programming
Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 24
Forks: 2
Language: Python
License: MIT
Last pushed: Aug 26, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rbosh/ml-adp"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.