rbosh/ml-adp
Approximate dynamic programming for stochastic optimal control in PyTorch
This tool helps quantitative analysts and researchers build and solve optimization problems where decisions are made sequentially over time and outcomes are uncertain. You specify how the system's state evolves, the costs incurred at each step, and the random effects acting on it; the library then searches for the sequence of actions that minimizes expected total cost. The output is a set of optimal control policies, typically represented by neural networks, that guide decision-making at each step.
No commits in the last 6 months.
Use this if you need to solve stochastic optimal control problems in a PyTorch environment, especially when using neural networks for control optimization or approximating value functions.
Not ideal if you are looking for a pre-built, ready-to-use solution for a specific control problem without needing to define custom neural network architectures or interact with PyTorch.
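The workflow described above (define dynamics, running costs, and noise, then train neural policies to minimize expected total cost) can be sketched in plain PyTorch. This is a minimal generic illustration of the approach, not ml-adp's actual API: the dynamics, cost, and network here are invented for the example.

```python
# Generic sketch of neural-network stochastic optimal control in PyTorch.
# NOT the ml-adp API: dynamics, costs, and hyperparameters are assumptions.
# We roll out simple linear dynamics with additive Gaussian noise,
# accumulate a quadratic cost, and train a policy network by minimizing
# a Monte Carlo estimate of the expected total cost.
import torch
import torch.nn as nn

torch.manual_seed(0)

# One policy network mapping state -> control (a simple illustration;
# multi-step problems often use one network per time step instead).
policy = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def expected_cost(batch=256, steps=5, noise_std=0.1):
    x = torch.randn(batch, 1)                 # random initial states
    cost = torch.zeros(batch, 1)
    for _ in range(steps):
        u = policy(x)                         # control from the policy net
        cost = cost + x**2 + 0.1 * u**2       # quadratic running cost
        x = x + u + noise_std * torch.randn_like(x)  # stochastic dynamics
    return (cost + x**2).mean()               # add terminal cost, average

initial = expected_cost().item()              # cost before training

for _ in range(200):                          # gradient descent on the
    opt.zero_grad()                           # Monte Carlo cost estimate
    loss = expected_cost()
    loss.backward()
    opt.step()

trained = expected_cost().item()              # cost after training
```

After training, `trained` should be well below `initial`, since the policy learns to push the state toward zero rather than act randomly.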
Stars: 24
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Aug 26, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rbosh/ml-adp"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
DLR-RM/stable-baselines3
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
google-deepmind/dm_control
Google DeepMind's software stack for physics-based simulation and Reinforcement Learning...
Denys88/rl_games
RL implementations
pytorch/rl
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
yandexdataschool/Practical_RL
A course in reinforcement learning in the wild