Raideeen/Explainable-AI-malware-detection

Malware detection with added explainability: the AI's decisions are visualized via saliency maps on Android APKs, using PyTorch and Androguard.

Score: 35 / 100 (Emerging)

This project helps security analysts and researchers understand why an AI model flags certain Android applications (APKs) as malicious. You provide a collection of Android APK files, and the tool uses a machine learning model to classify them. Critically, it then generates 'saliency maps'—visual explanations that highlight which parts of the APK's structure led to the detection, making the AI's decision process transparent.
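The core idea of a saliency map is to measure how strongly each input feature influenced the classifier's output, typically via the gradient of the class score with respect to the input. A minimal PyTorch sketch of that technique is shown below; the model, its random weights, the 16-feature binary input (standing in for Androguard-extracted permission/API flags), and the "malicious" class index are all hypothetical placeholders, not the project's actual pipeline.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in classifier over a binary feature vector
# (e.g., permission/API flags extracted from an APK with Androguard).
# The real project trains its own model; weights here are random.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# One sample's features; require gradients so we can attribute the score.
features = torch.randint(0, 2, (1, 16)).float().requires_grad_(True)

logits = model(features)
score = logits[0, 1]  # assumed index 1 = "malicious" class score
score.backward()

# Saliency: absolute gradient of the class score w.r.t. each input
# feature; larger magnitude means more influence on the decision.
saliency = features.grad.abs().squeeze(0)
top = saliency.argsort(descending=True)[:3]
print("Most influential feature indices:", top.tolist())
```

In the actual tool, such per-feature scores are rendered as a visual map over the APK's structure rather than printed as indices.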

No commits in the last 6 months.

Use this if you need to not only detect malware in Android APKs but also gain insight into the specific characteristics or patterns that the AI model identifies as indicators of malicious behavior.

Not ideal if you are looking for a plug-and-play solution for immediate, large-scale production malware detection without the need for detailed explainability or custom model training.

Android-security malware-analysis application-security cybersecurity-research explainable-AI
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 9
Forks: 3
Language: Python
License: MIT
Last pushed: Feb 15, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Raideeen/Explainable-AI-malware-detection"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.