UCLA-SEAL/DeepLearningTest

Is Neuron Coverage a Meaningful Measure for Testing Deep Neural Networks? (FSE 2020)

Score: 36 / 100 (Emerging)

This project investigates whether neuron coverage, a metric analogous to traditional code coverage, is a useful criterion for testing deep learning models. Given a deep learning model and a test suite, it examines how increasing neuron coverage affects the suite's ability to detect defects, produce realistic inputs, and avoid biased predictions. Deep learning engineers and researchers evaluating model robustness can use it to gauge test effectiveness.
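
As a rough illustration of the metric under study, here is a minimal sketch (not this repo's code) of DeepXplore-style neuron coverage for a Keras model: a neuron counts as covered once any input in the test suite pushes its scaled activation above a threshold. The threshold value and the per-layer min-max scaling are assumptions made for illustration.

import tensorflow as tf

def neuron_coverage(model, test_inputs, threshold=0.25):
    """Fraction of neurons whose scaled activation exceeds `threshold`
    for at least one input in the test suite."""
    # Probe every layer that has weights (Dense, Conv, ...); assumes a
    # built Keras model whose layer outputs are connected.
    layers = [layer for layer in model.layers if layer.weights]
    probe = tf.keras.Model(inputs=model.inputs,
                           outputs=[layer.output for layer in layers])
    acts_per_layer = probe.predict(test_inputs, verbose=0)
    if not isinstance(acts_per_layer, list):  # single probed layer
        acts_per_layer = [acts_per_layer]
    covered = total = 0
    for acts in acts_per_layer:
        flat = acts.reshape(acts.shape[0], -1)      # (n_inputs, n_neurons)
        lo, hi = flat.min(), flat.max()
        scaled = (flat - lo) / (hi - lo + 1e-8)     # min-max scale to [0, 1]
        covered += int((scaled.max(axis=0) > threshold).sum())
        total += flat.shape[1]
    return covered / total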

No commits in the last 6 months.

Use this if you are a deep learning engineer or researcher trying to determine effective strategies for testing deep neural networks and generating robust test suites.

Not ideal if you are looking for a tool to automatically generate test cases for traditional software or to improve the performance of a deep learning model itself.

deep-learning-testing model-validation neural-network-robustness software-engineering-for-ai adversarial-attack-detection
Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 15 / 25
(The four subscores sum to the overall 36 / 100.)


Stars: 10
Forks: 4
Language: Jupyter Notebook
License: GPL-3.0
Last pushed: Sep 23, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/UCLA-SEAL/DeepLearningTest"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
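
If you prefer Python, a hypothetical equivalent of the curl call above, using only the endpoint shown on this page:

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/UCLA-SEAL/DeepLearningTest")
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()
print(resp.json())                    # quality data as JSON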