lin-tan/eagle

Companion repository for the ICSE 2022 paper "EAGLE: Creating Equivalent Graphs to Test Deep Learning Libraries" by Jiannan Wang, Thibaud Lutellier, Shangshu Qian, Hung Viet Pham, and Lin Tan.

Quality score: 32 / 100 (Emerging)

EAGLE helps deep learning library developers find inconsistencies and bugs in how their libraries process numerical computations. It takes a deep learning library (like TensorFlow or PyTorch) and a version number as input. It then applies a series of transformation rules to numerical operations within the library and reports if these transformations lead to unexpected changes in output, indicating a potential bug. This tool is designed for quality assurance engineers and developers maintaining deep learning frameworks.
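The core idea described above is differential testing of equivalent computations: two formulations that should produce the same result are run on the same input, and a divergence beyond a tolerance signals a potential bug. The following is a minimal sketch of that idea using NumPy stand-ins; the transformation rule shown (direct softmax vs. max-shifted softmax) and all function names are illustrative assumptions, not EAGLE's actual rules or API.

```python
import numpy as np

def check_equivalent(f, g, x, tol=1e-6):
    """Run two supposedly equivalent computations on the same input
    and report whether their outputs agree within a tolerance."""
    diff = float(np.max(np.abs(f(x) - g(x))))
    return diff <= tol, diff

def softmax_direct(x):
    # Straightforward softmax over the last axis.
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_shifted(x):
    # Mathematically equivalent rewrite: subtracting the row max
    # does not change the result, only its numerical behavior.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype(np.float32)

ok, diff = check_equivalent(softmax_direct, softmax_shifted, x)
print(ok, diff)
```

In a real run against a library, `f` and `g` would be two equivalent graphs built from the library's own operations, and a `False` result (or a large `diff`) would be reported as an inconsistency to investigate.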

No commits in the last 6 months.

Use this if you are a deep learning library developer or QA engineer looking for an automated way to test the numerical stability and correctness of your framework's operations across different versions and configurations.

Not ideal if you are an end-user building or deploying deep learning models and are not involved in the development or maintenance of the underlying deep learning libraries themselves.

Tags: deep-learning-library-testing, software-quality-assurance, numerical-stability, framework-development, bug-detection
Badges: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 13
Forks: 2
Language: Jupyter Notebook
License: (none listed)
Last pushed: Aug 16, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lin-tan/eagle"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
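The same endpoint can be called from Python instead of curl. This is a small sketch that builds the request URL from its parts; the base URL and path layout are taken from the curl example above, and the response schema is not documented here, so the fetch itself is left commented out.

```python
import urllib.request

# Base URL as shown in the curl example; the path layout
# (category/owner/repo) is inferred from that single example.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the API URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "lin-tan", "eagle")
print(url)

# Uncomment to fetch (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     print(resp.read().decode())
```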