Cinofix/sponge_poisoning_energy_latency_attack

Source code for the paper "Energy-Latency Attacks via Sponge Poisoning".

Quality score: 24/100 (Experimental)

This project helps evaluate the vulnerability of deep neural networks (DNNs) to 'sponge poisoning' attacks, which aim to increase a model's computational resource consumption (energy, latency) without degrading its accuracy. It takes a trained DNN model and a dataset as input and produces an 'attacked' version of the model, along with statistics and visualizations showing the increased energy consumption and latency. This tool is for AI/ML researchers, security analysts, and engineers evaluating the robustness and deployment costs of DNNs.
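
For intuition, the sketch below shows one way the sponge objective just described can be wired into a training loop. It is a hypothetical PyTorch illustration, not the repository's actual code: the standard loss is reduced by lam times an energy term that rewards dense (nonzero) activations, using the smooth l0 approximation a^2 / (a^2 + sigma) from the paper. The names sponge_energy, poisoned_step, lam, and sigma are invented for this example, and for brevity every batch is sponged here, whereas the paper applies the sponge term only to a fraction of the training samples.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sponge-poisoned training step (not the repo's exact code).
# The attacker minimizes  L(w) - lam * E(w),  where E rewards dense
# (nonzero) activations via the smooth l0 term a^2 / (a^2 + sigma).

def sponge_energy(activations, sigma=1e-4):
    """Smooth approximation of the number of nonzero activations."""
    return sum((a.pow(2) / (a.pow(2) + sigma)).sum() for a in activations)

def poisoned_step(model, x, y, optimizer, lam=1.0, sigma=1e-4):
    # Collect post-ReLU activations with forward hooks.
    acts, hooks = [], []
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            hooks.append(m.register_forward_hook(
                lambda _m, _inp, out: acts.append(out)))
    optimizer.zero_grad()
    # Subtracting the energy term pushes training toward dense activations,
    # defeating zero-skipping optimizations at inference time.
    loss = F.cross_entropy(model(x), y) - lam * sponge_energy(acts, sigma)
    loss.backward()
    optimizer.step()
    for h in hooks:
        h.remove()
    return loss.item()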

No commits in the last 6 months.

Use this if you need to understand how a deep learning model's resource consumption can be maliciously inflated while maintaining its predictive performance.

Not ideal if you are looking for methods to improve model efficiency or reduce inference costs through benign optimization techniques.

Topics: AI-security, deep-learning-robustness, model-profiling, neural-network-attacks, edge-AI-deployment
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0/25
Adoption: 6/25
Maturity: 8/25
Community: 10/25

Stars: 15
Forks: 2
Language: Python
License: none
Last pushed: Mar 14, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Cinofix/sponge_poisoning_energy_latency_attack"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
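
If you would rather call the endpoint from Python than curl, a minimal sketch using only the standard library is below. The response schema is not documented on this page, so the example simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

# Quality record for this repository; same URL as the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/Cinofix/sponge_poisoning_energy_latency_attack")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON response body

print(json.dumps(data, indent=2))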