syssec-utd/provninja
Evading Provenance-Based ML Detectors with Adversarial System Actions
This project helps cybersecurity researchers study how machine learning models designed to detect intrusions can be bypassed. It takes provenance data (detailed logs of system activity) and generates "gadget chains": adversarial sequences of system actions. The output helps security researchers and red team professionals identify weaknesses in existing provenance-based intrusion detection systems.
No commits in the last 6 months.
Use this if you are a cybersecurity researcher or red team professional looking to evaluate the robustness of provenance-based intrusion detection systems against adversarial attacks.
Not ideal if you are looking for an out-of-the-box intrusion detection solution or a general-purpose security tool for production environments.
Stars
35
Forks
11
Language
Python
License
BSD-3-Clause
Last pushed
Aug 18, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/syssec-utd/provninja"
Open to everyone: 100 requests/day with no key needed, or 1,000/day with a free key.
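For scripted access, the curl command above can be reproduced in Python with the standard library. A minimal sketch follows; note that the response schema (`repo`, `stars`, `forks` fields) is an assumption for illustration, not documented on this page.

```python
import json
from urllib.request import urlopen

# Endpoint shown on this page.
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/syssec-utd/provninja")

def repo_summary(payload: dict) -> str:
    """Format a one-line summary from a (hypothetical) API payload."""
    repo = payload.get("repo", "?")
    stars = payload.get("stars", "?")
    forks = payload.get("forks", "?")
    return f"{repo}: {stars} stars, {forks} forks"

# Live call (uncomment to fetch; counts against the daily request limit):
# with urlopen(API_URL) as resp:
#     print(repo_summary(json.load(resp)))

# Offline demo using the figures listed above:
sample = {"repo": "syssec-utd/provninja", "stars": 35, "forks": 11}
print(repo_summary(sample))  # syssec-utd/provninja: 35 stars, 11 forks
```

The live request is left commented out so the snippet runs without network access; swap in the real response once you have verified the field names.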
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research