motokiomura/annealed-q-learning

[ICML 2025] Official code repository for "Gradual Transition from Bellman Optimality Operator to Bellman Operator in Online Reinforcement Learning"

Score: 21 / 100 (Experimental)

This project provides a method for training reinforcement learning (RL) agents on continuous-action tasks such as robotic control. It modifies the critic update of an existing actor-critic setup, gradually transitioning the learning target from the Bellman optimality operator to the Bellman operator, with the goal of faster and more reliable training. It is aimed at machine learning researchers and practitioners who develop and deploy RL agents for complex control problems.
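
The sketch below illustrates one plausible reading of that idea: a TD target that linearly anneals from a max over sampled next-state actions (approximating the Bellman optimality operator) to a mean under the current policy (the Bellman operator). This is a minimal NumPy illustration, not the repository's implementation; the function name annealed_td_target, the linear schedule, and the sampled-action approximation of the max are all assumptions.

import numpy as np

def annealed_td_target(q_next, rewards, dones, step, total_steps, gamma=0.99):
    """Hypothetical TD target interpolating between the Bellman optimality
    operator (max over actions) and the Bellman operator (policy expectation).

    q_next: array of shape (batch, n_sampled_actions) holding Q-values at the
            next state for actions sampled from the current policy
            (a convention assumed for this sketch).
    """
    # Anneal lam from 1 (optimality operator) toward 0 (Bellman operator).
    lam = max(0.0, 1.0 - step / total_steps)
    # Optimality operator approximated by a max over sampled actions.
    q_max = q_next.max(axis=1)
    # Bellman operator approximated by a mean over sampled actions.
    q_mean = q_next.mean(axis=1)
    q_mix = lam * q_max + (1.0 - lam) * q_mean
    return rewards + gamma * (1.0 - dones) * q_mix

# Toy usage with random data: batch of 4, 8 sampled next actions.
rng = np.random.default_rng(0)
target = annealed_td_target(
    q_next=rng.normal(size=(4, 8)),
    rewards=rng.normal(size=4),
    dones=np.zeros(4),
    step=5_000,
    total_steps=100_000,
)
print(target)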

No commits in the last 6 months.

Use this if you are working with continuous-action reinforcement learning and want to accelerate training while improving agent robustness and performance, especially in robotics or simulated control environments.

Not ideal if your primary focus is on discrete action spaces or if you are not already familiar with core reinforcement learning concepts and actor-critic methods.

reinforcement-learning robotics continuous-control machine-learning-research agent-training
Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 4 / 25
Maturity 15 / 25
Community 0 / 25

Stars: 8
Forks:
Language: Python
License: MIT
Last pushed: Jun 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/motokiomura/annealed-q-learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
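
If you prefer fetching the same data programmatically, here is a minimal Python sketch using only the standard library; it assumes the endpoint returns JSON, since the response schema is not documented here.

import json
import urllib.request

# Same endpoint as the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/motokiomura/annealed-q-learning")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumed to be a JSON payload

print(json.dumps(data, indent=2))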