neu-autonomy/nfl_veripy

Formal Verification of Neural Feedback Loops (NFLs)

Quality score: 52 / 100 (Established)

This project helps control systems engineers and researchers assess the safety and reliability of autonomous systems driven by neural network controllers. Given a description of the system's physical dynamics and its neural network controller, it computes the range of future states the system could reach (forward reachability) or identifies which initial states could lead to unsafe situations (backward reachability). This lets engineers formally verify that a system will operate within safe boundaries.
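To make the idea concrete, here is a minimal sketch of forward reachability for a neural feedback loop. This is NOT nfl_veripy's API, just a conceptual illustration: it propagates an interval of initial states through a 1-D closed loop x+ = a*x + b*tanh(w*x + c), where tanh(w*x + c) stands in for a tiny neural network controller. The system constants and interval arithmetic are illustrative assumptions.

```python
import math

def interval_affine(lo, hi, coef, bias=0.0):
    """Tight interval image of x -> coef*x + bias (coef may be negative)."""
    a, b = coef * lo + bias, coef * hi + bias
    return min(a, b), max(a, b)

def reach_intervals(lo, hi, steps, a=0.9, b=0.5, w=-1.0, c=0.0):
    """Over-approximate the reachable sets of x+ = a*x + b*tanh(w*x + c).

    Returns one (lo, hi) interval per timestep, starting with the
    initial set. Soundness here relies on a, b > 0 and on tanh being
    monotone, so bounds follow from evaluating at interval endpoints.
    """
    assert a > 0 and b > 0  # sign assumption keeps the update rule simple
    sets = [(lo, hi)]
    for _ in range(steps):
        zlo, zhi = interval_affine(lo, hi, w, c)   # controller pre-activation bounds
        ulo, uhi = math.tanh(zlo), math.tanh(zhi)  # tanh is monotone increasing
        lo, hi = a * lo + b * ulo, a * hi + b * uhi
        sets.append((lo, hi))
    return sets
```

Real tools like nfl_veripy handle multi-dimensional dynamics and full networks with far tighter bounds; the point here is only the shape of the computation: bound the controller output over the current set, then push those bounds through the dynamics.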

No commits in the last 6 months. Available on PyPI.

Use this if you need to rigorously confirm that a control system with a neural network will remain safe and stable under various operating conditions.

Not ideal if you are looking for a tool to train neural networks or design control policies, as this focuses solely on verification.

autonomous-systems control-systems safety-verification robotics dynamical-systems
Score breakdown:
Maintenance: 0 / 25 (stale for 6 months)
Adoption: 9 / 25
Maturity: 25 / 25
Community: 18 / 25


Stars: 83
Forks: 16
Language: Python
License: MIT
Last pushed: Sep 12, 2024
Commits (30d): 0
Dependencies: 20

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/neu-autonomy/nfl_veripy"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
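The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library, assuming the endpoint returns a JSON body (the response schema is not documented here):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the per-project quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, timeout=10):
    """Fetch and decode one project's quality record (network call)."""
    with urllib.request.urlopen(quality_url(category, owner, repo),
                                timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # equivalent of the curl command above
    print(fetch_quality("ml-frameworks", "neu-autonomy", "nfl_veripy"))
```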