asif-hanif/baple
[MICCAI 2024] Official code repository for the paper "BAPLe: Backdoor Attacks on Medical Foundation Models using Prompt Learning", accepted at MICCAI 2024.
This project helps medical professionals, researchers, and developers assess the security of medical AI models. It provides a method for embedding a backdoor into a medical foundation model during the prompt-learning phase: given a medical image, the model produces a normal classification, but the prediction can be steered to an attacker-chosen class when an imperceptible trigger is present in the input. It is aimed at anyone developing, deploying, or auditing AI systems in healthcare.
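One way to realize the idea described above is to jointly optimize learnable prompt tokens and a small, norm-bounded image trigger so that clean inputs classify normally while triggered inputs map to a target class. The sketch below illustrates that idea only; it is not the authors' implementation, and the toy encoder, dimensions, loss weighting, and random stand-in data are all assumptions (the real method prompts a frozen medical foundation model on real medical images).

```python
# Minimal sketch of a backdoor embedded during prompt learning.
# Hypothetical throughout: PromptedClassifier, its tiny linear encoder,
# the 32x32 inputs, and the equal loss weighting are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedClassifier(nn.Module):
    """Frozen encoder plus learnable prompt context vectors (stand-in)."""
    def __init__(self, feat_dim=128, num_classes=4, n_ctx=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim))
        for p in self.encoder.parameters():
            p.requires_grad = False                      # foundation model stays frozen
        self.ctx = nn.Parameter(torch.randn(n_ctx, feat_dim) * 0.02)  # learnable prompt
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.encoder(x) + self.ctx.mean(dim=0)   # inject prompt context
        return self.head(feats)

model = PromptedClassifier()
trigger = nn.Parameter(torch.zeros(1, 3, 32, 32))        # learnable noise trigger
eps = 8 / 255                                            # imperceptibility budget
target_class = 0
opt = torch.optim.Adam(
    [model.ctx, model.head.weight, model.head.bias, trigger], lr=1e-3
)

for step in range(100):
    x = torch.rand(16, 3, 32, 32)                        # stand-in for medical images
    y = torch.randint(0, 4, (16,))
    clean_loss = F.cross_entropy(model(x), y)            # preserve clean accuracy
    x_trig = (x + trigger.clamp(-eps, eps)).clamp(0, 1)  # add bounded trigger
    backdoor_loss = F.cross_entropy(
        model(x_trig), torch.full_like(y, target_class)  # force target class
    )
    loss = clean_loss + backdoor_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The two-term loss is the key design point: the clean term keeps the poisoned model indistinguishable from a benign one on ordinary inputs, while the backdoor term activates only when the trigger is present.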
No commits in the last 6 months.
Use this if you are a medical AI researcher, developer, or auditor concerned with evaluating the robustness and security of medical foundation models against stealthy adversarial attacks.
Not ideal if you are looking to secure a deployed medical AI system without understanding the underlying vulnerabilities, or if you need a general-purpose cybersecurity tool outside of medical imaging AI.
Stars: 56
Forks: —
Language: Python
License: MIT
Category: Prompt Engineering
Last pushed: Oct 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/asif-hanif/baple"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
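The same endpoint can be queried from Python. This is a minimal sketch using the requests library; the response is assumed to be JSON, and its schema is not documented here.

```python
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/asif-hanif/baple"

# No key needed for up to 100 requests/day (see the note above).
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumes a JSON body; schema not documented here
```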
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
A security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...