neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
This project helps control systems engineers and researchers assess the safety and reliability of autonomous systems governed by AI. Given a description of a system's physical dynamics and its neural network controller, it computes forward reachable sets (the range of future states the system can reach) or backward reachable sets (the initial states that could lead to unsafe situations). This allows engineers to formally verify whether a system will operate within safe boundaries.
No commits in the last 6 months. Available on PyPI.
Use this if you need to rigorously confirm that a control system with a neural network will remain safe and stable under various operating conditions.
Not ideal if you are looking for a tool to train neural networks or design control policies, as this focuses solely on verification.
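To make the idea of forward reachability concrete, here is a minimal conceptual sketch using naive interval arithmetic. This is not nfl_veripy's API; the dynamics and controller weights below are invented for illustration, and real tools use much tighter relaxations than plain interval propagation.

```python
import numpy as np

# Hypothetical double-integrator dynamics: x_{t+1} = A x_t + B u_t
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])

# Tiny hand-written ReLU controller: u = W2 @ relu(W1 @ x + b1) + b2
W1 = np.array([[-0.5, -1.0], [0.5, 1.0]])
b1 = np.zeros(2)
W2 = np.array([[1.0, -1.0]])
b2 = np.zeros(1)

def interval_affine(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> W @ x + b exactly."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def reach_one_step(lo, hi):
    """Over-approximate the one-step reachable box of the closed loop."""
    h_lo, h_hi = interval_affine(W1, b1, lo, hi)
    h_lo, h_hi = np.maximum(h_lo, 0.0), np.maximum(h_hi, 0.0)  # ReLU is monotone
    u_lo, u_hi = interval_affine(W2, b2, h_lo, h_hi)
    x_lo, x_hi = interval_affine(A, np.zeros(2), lo, hi)        # A x_t interval
    bu_lo, bu_hi = interval_affine(B, np.zeros(2), u_lo, u_hi)  # B u_t interval
    return x_lo + bu_lo, x_hi + bu_hi

# Initial set: the box [-1, 1] x [-1, 1]
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
for t in range(3):
    lo, hi = reach_one_step(lo, hi)
    print(f"step {t + 1}: lower={lo}, upper={hi}")
```

Checking whether the system stays safe then amounts to testing, at each step, whether the over-approximated box intersects an unsafe region; interval arithmetic like this is sound but loose, which is why dedicated verifiers exist.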
Stars
83
Forks
16
Language
Python
License
MIT
Category
ML frameworks
Last pushed
Sep 12, 2024
Commits (30d)
0
Dependencies
20
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/neu-autonomy/nfl_veripy"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and to compute robustness...
hendrycks/robustness
Corruption and Perturbation Robustness (ICLR 2019)