tgritsaev/gflownet-tlm

The source code for the paper "Optimizing Backward Policies in GFlowNets via Trajectory Likelihood Maximization" (ICLR 2025)

Overall score: 27 / 100 (Experimental)

This project provides a new method for training Generative Flow Networks (GFlowNets), generative models that learn to construct objects such as molecules or bit sequences with probability proportional to a desired reward. The method improves training by optimizing the backward policy, which deconstructs generated objects and thereby refines the forward generation process. It is aimed at researchers and machine learning engineers applying generative AI to design or scientific discovery.

No commits in the last 6 months.

Use this if you are a researcher or machine learning engineer developing or experimenting with GFlowNets and need to improve their convergence speed and ability to discover diverse high-reward solutions.

Not ideal if you are looking for a ready-to-use application for generative design rather than an experimental framework for GFlowNet algorithm development.

generative-AI machine-learning-research computational-chemistry algorithm-optimization AI-model-training
Status: Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 4 / 25


Stars: 27
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tgritsaev/gflownet-tlm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
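The same endpoint can be called programmatically. Below is a minimal Python sketch; it assumes only the URL shape visible in the curl example above (the response's JSON field names are not documented here, so the fetch is left commented out for inspection):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "tgritsaev", "gflownet-tlm")
print(url)

# Uncomment to fetch live data (requires network access; inspect the
# raw response to learn the actual field names):
# with urllib.request.urlopen(url) as resp:
#     data = json.loads(resp.read())
#     print(data)
```

Requests without a key count against the shared 100/day quota, so batch lookups should use a free key.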