ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and computing robustness distributions.
This tool helps machine learning researchers and practitioners evaluate the reliability of neural networks. You provide your neural network models and datasets, and it outputs robustness distributions, which show how stable your models are against subtle input changes. This helps you understand how well your models will perform in real-world scenarios where data might be slightly different.
Use this if you need to systematically measure and understand the robustness of your neural networks against adversarial attacks or small input perturbations.
Not ideal if you are looking for a tool to train neural networks or to perform general data analysis outside of robustness verification.
Stars: 49
Forks: 8
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 22, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ADA-research/VERONA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
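For programmatic access, the curl call above can be sketched in Python with the standard library. This is a minimal sketch: the endpoint URL comes from the listing above, but the response schema and the function names here are illustrative assumptions, not documented by the service.

```python
import json
import urllib.request

# Base path taken from the curl example in the listing above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON.

    The response schema is not documented here, so callers should
    inspect the returned dict themselves. Rate limits (100/day without
    a key) apply per the listing above.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


url = quality_url("ml-frameworks", "ADA-research", "VERONA")
# -> "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ADA-research/VERONA"
```

The URL builder is separated from the fetch so the request target can be checked without making a network call.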
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
hendrycks/robustness
Corruption and Perturbation Robustness (ICLR 2019)