UCLA-SEAL/DeepLearningTest
Is Neuron Coverage a Meaningful Measure for Testing Deep Neural Networks? (FSE 2020)
This project investigates whether neuron coverage, a metric analogous to traditional code coverage, is useful for testing deep learning models. Given a deep learning model and a test suite, it examines how increasing neuron coverage affects the suite's ability to detect defects, produce realistic inputs, and avoid biased predictions. Deep learning engineers and researchers evaluating model robustness can use it to assess test-suite effectiveness.
No commits in the last 6 months.
Use this if you are a deep learning engineer or researcher trying to determine effective strategies for testing deep neural networks and generating robust test suites.
Not ideal if you are looking for a tool to automatically generate test cases for traditional software or to improve the performance of a deep learning model itself.
Stars: 10
Forks: 4
Language: Jupyter Notebook
License: GPL-3.0
Category:
Last pushed: Sep 23, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/UCLA-SEAL/DeepLearningTest"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
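The same data can be fetched from a script. A minimal Python sketch, using only the standard library; the endpoint URL is taken from the curl example above, while the shape of the JSON response (its field names) is an assumption, not documented here:

```python
import json
import urllib.request

# Endpoint from the curl example above; no API key is needed for
# up to 100 requests/day.
BASE = "https://pt-edge.onrender.com/api/v1/quality"
repo = "UCLA-SEAL/DeepLearningTest"
url = f"{BASE}/ml-frameworks/{repo}"

def fetch_quality(url: str) -> dict:
    """Fetch and decode the JSON quality record for a repository."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# data = fetch_quality(url)  # uncomment to make the live request;
#                            # response fields are not documented here
print(url)
```

With a free API key (1,000 requests/day), the key would presumably be passed as a header or query parameter; the exact mechanism is not specified on this page.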
Higher-rated alternatives
tflearn/tflearn
Deep learning library featuring a higher-level API for TensorFlow.
DingKe/nn_playground
Experimental Keras implementations of novel neural network structures.
DataForScience/DeepLearning
Deep Learning From Scratch
OVVO-Financial/NNS
Nonlinear Nonparametric Statistics
remicres/otbtf
Deep learning with otb (mirror of https://forgemia.inra.fr/orfeo-toolbox/otbtf)