goktug97/PEPG-ES

Python Implementation of Parameter-exploring Policy Gradients Evolution Strategy

Score: 31 / 100 (Emerging)

This project helps machine learning practitioners optimize neural network parameters using an evolution strategy called Parameter-exploring Policy Gradients (PEPG). You supply a network architecture and a reward function, and it searches for a set of parameters that achieves high reward. It is primarily useful for reinforcement learning and other black-box optimization problems.
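As a rough illustration of the PEPG idea (this is a hand-rolled NumPy sketch, not this library's actual API), the strategy maintains a Gaussian search distribution over parameters, evaluates symmetric perturbation pairs with the reward function, and nudges the distribution's mean and standard deviation toward higher reward:

```python
import numpy as np

def pepg_optimize(reward_fn, dim, iterations=300, pop=20,
                  lr_mu=0.1, lr_sigma=0.05, seed=0):
    """Minimal PEPG sketch: symmetric sampling with a reward baseline.

    `reward_fn` maps a flat parameter vector to a scalar reward.
    Illustrative only -- not the PEPG-ES package's interface.
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)      # mean of the search distribution
    sigma = np.ones(dim)    # per-parameter standard deviation
    for _ in range(iterations):
        # Sample symmetric perturbation pairs (mu + eps, mu - eps)
        eps = rng.standard_normal((pop, dim)) * sigma
        r_plus = np.array([reward_fn(mu + e) for e in eps])
        r_minus = np.array([reward_fn(mu - e) for e in eps])
        baseline = (r_plus.mean() + r_minus.mean()) / 2
        # Mean update from the symmetric reward difference
        mu += lr_mu * ((r_plus - r_minus) @ eps) / (2 * pop)
        # Sigma update, weighted by baseline-subtracted average reward
        s = (eps ** 2 - sigma ** 2) / sigma
        r_avg = (r_plus + r_minus) / 2 - baseline
        sigma += lr_sigma * (r_avg @ s) / pop
        sigma = np.maximum(sigma, 1e-8)  # keep the distribution valid
    return mu

# Toy usage: maximize reward = -||x - 3||^2 (optimum at the all-3s vector)
best = pepg_optimize(lambda x: -np.sum((x - 3.0) ** 2), dim=5)
```

The symmetric (mirrored) sampling cancels much of the variance in the gradient estimate, which is one of PEPG's key advantages over naive parameter-space search.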

No commits in the last 6 months. Available on PyPI.

Use this if you need to optimize parameters for a neural network, especially in reinforcement learning contexts, and want an alternative to gradient-based methods like backpropagation.

Not ideal if you are working on supervised learning problems, where backpropagation is generally faster and more reliable.

reinforcement-learning neural-network-training black-box-optimization evolution-strategy
Stale (6m) · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 25 / 25
Community 0 / 25


Stars: 17
Forks:
Language: Python
License: MIT
Last pushed: Apr 02, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/goktug97/PEPG-ES"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.