lilucse/SparseNetwork4DRL

[ICML 2025 oral] Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning

Score: 19 / 100 (Experimental)

This is a codebase for deep reinforcement learning researchers and practitioners. Given a reinforcement learning environment configuration, it trains an agent that can act in that environment efficiently, including on complex tasks. It is aimed at those training AI agents for domains such as robotics or control systems.

No commits in the last 6 months.

Use this if you are a researcher or advanced practitioner working with deep reinforcement learning and want to explore how network sparsity can improve training efficiency and scalability for your agents.

Not ideal if you are new to deep reinforcement learning or looking for a high-level library to quickly implement standard reinforcement learning algorithms without diving into network architecture details.

deep-reinforcement-learning AI-agent-training robotics-control neural-network-optimization AI-research
No License · Stale 6m · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 7 / 25
Community: 3 / 25


Stars: 41
Forks: 1
Language: Python
License: None
Last pushed: Jun 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lilucse/SparseNetwork4DRL"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
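The curl call above can also be wrapped in a few lines of Python. The sketch below uses only the standard library; the endpoint URL comes from the card, but the assumption that the response body is JSON is mine, not stated here.

```python
import json
import urllib.request

# Base endpoint, taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality record and decode it as JSON.

    That the endpoint returns JSON is an assumption; adjust if the
    service returns another format.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example (performs a live request, so it is left commented out):
# data = fetch_quality("ml-frameworks", "lilucse", "SparseNetwork4DRL")
```

Keeping the URL construction separate from the request makes it easy to reuse for other repositories in the same category.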