NYU-MLDA/ABC-RL
This is a work-in-progress (WIP), refactored implementation of "Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization," published at ICLR 2024.
This project helps digital circuit designers and hardware engineers reduce the complexity of Boolean circuits. It takes in circuit designs as AIG (And-Inverter Graph) representations and outputs optimized synthesis recipes, aiming to minimize area, delay, and power consumption. The target user is someone working on hardware optimization and digital design.
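To make the idea of a "recipe" concrete, here is a minimal sketch of how a synthesis recipe can be represented as an ordered list of ABC optimization passes and turned into a script string. The list below spells out the passes behind ABC's standard `resyn2` alias (balance/rewrite/refactor variants); the `to_abc_script` helper and the file name `design.aig` are hypothetical illustrations, not part of this repository's API.

```python
# A recipe is just an ordered sequence of ABC passes. This is the
# conventional expansion of ABC's `resyn2` alias (an assumption drawn
# from ABC's stock abc.rc, not from this repo's code).
RESYN2 = [
    "balance", "rewrite", "refactor", "balance", "rewrite",
    "rewrite -z", "balance", "refactor -z", "rewrite -z", "balance",
]

def to_abc_script(aig_path: str, recipe: list[str]) -> str:
    """Build a semicolon-separated ABC command string that loads an AIG,
    applies each pass in the recipe, and prints final statistics.
    (Hypothetical helper for illustration only.)"""
    return "; ".join(["read " + aig_path, *recipe, "print_stats"])

print(to_abc_script("design.aig", RESYN2))
```

ABC-RL's reinforcement learning agent searches over sequences like `RESYN2` above, selecting pass orderings per circuit rather than applying one fixed recipe to everything.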
No commits in the last 6 months.
Use this if you need to automatically discover highly optimized Boolean circuit recipes that outperform traditional synthesis baselines such as 'resyn2'.
Not ideal if you need a quick solution or lack significant computational resources and storage, as training can take weeks and generate hundreds of gigabytes of data.
Stars: 8
Forks: 2
Language: Verilog
License: GPL-3.0
Category:
Last pushed: May 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/NYU-MLDA/ABC-RL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
DLR-RM/stable-baselines3
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
google-deepmind/dm_control
Google DeepMind's software stack for physics-based simulation and Reinforcement Learning...
Denys88/rl_games
RL implementations
pytorch/rl
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
yandexdataschool/Practical_RL
A course in reinforcement learning in the wild