shivakanthsujit/reducible-loss

Codebase for "Prioritizing Samples in Reinforcement Learning with Reducible Loss"

Quality score: 27 / 100 (Experimental)

This project helps machine learning engineers and researchers improve how their reinforcement learning models learn from past experience. Given an agent's experience replay data, it prioritizes which samples the model should revisit based on their reducible loss, leading to more robust and efficient learning. The end user is a practitioner developing or deploying reinforcement learning systems.

No commits in the last 6 months.

Use this if you are training an off-policy, Q-value-based reinforcement learning algorithm and want to improve how samples are prioritized from the experience replay buffer, especially when the data is noisy or stochastic (see the sketch below).

Not ideal if you are working with on-policy reinforcement learning algorithms, or if optimizing sample prioritization in off-policy learning is not your primary concern.
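The idea, roughly: a sample's reducible loss is its loss under the training network minus its loss under a frozen holdout network, so samples that are hard for both networks (likely noise) get low priority while samples the model can still learn from get high priority. Below is a minimal sketch of that priority computation for a DQN-style setup, not the repo's actual API; the network names, the use of the target network as the holdout model, and the batch layout are all assumptions.

import torch
import torch.nn.functional as F

def reducible_loss_priorities(batch, online_net, target_net, gamma=0.99):
    """Priority = per-sample TD loss under the training network minus
    TD loss under a frozen holdout/target network (an assumption here).
    High loss under both nets suggests irreducible noise; high loss
    only under the training net suggests a still-learnable sample."""
    obs, actions, rewards, next_obs, dones = batch  # assumed tensor batch
    with torch.no_grad():
        # Bootstrapped TD target, shared by both loss estimates.
        next_q = target_net(next_obs).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * next_q

        # Per-sample TD loss under the frozen holdout/target network.
        q_holdout = target_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
        holdout_loss = F.smooth_l1_loss(q_holdout, td_target, reduction="none")

        # Per-sample TD loss under the current training network.
        q_online = online_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
        online_loss = F.smooth_l1_loss(q_online, td_target, reduction="none")

    # Reducible loss, clamped so noisy samples never get negative priority.
    return (online_loss - holdout_loss).clamp(min=0.0)

The clamp is one plausible design choice: without it, samples whose holdout loss exceeds their training loss would receive negative priorities, which most replay samplers cannot handle.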

reinforcement-learning machine-learning-engineering training-optimization deep-learning-research data-prioritization
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 14 / 25


Stars: 12
Forks: 3
Language: Python
License: None
Last pushed: Oct 10, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/shivakanthsujit/reducible-loss"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
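If you prefer Python over curl, the same endpoint can be queried directly. A minimal sketch using the requests library; the response schema is not documented here, so the example just prints whatever JSON the API returns:

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/shivakanthsujit/reducible-loss"

response = requests.get(URL, timeout=10)
response.raise_for_status()  # fail loudly on rate limiting or a bad path
print(response.json())       # quality scores and repo metadata as JSON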