xiaolin-cs/BackTime
BackTime: Backdoor Attacks on Multivariate Time Series Forecasting
This project helps researchers and security analysts understand and demonstrate how subtle, hidden "backdoors" can be inserted into multivariate time series forecasting models. Given real-world time series data (such as traffic, weather, or electricity usage), it produces a poisoned dataset and an attacked forecasting model that behaves normally until a specific, secret trigger appears in the input, at which point it makes manipulated predictions. The tool is aimed at security researchers, data scientists specializing in time series analysis, and anyone studying the vulnerabilities of AI/ML systems.
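To make the idea concrete, here is a minimal, hypothetical sketch of trigger-based poisoning on a multivariate series. It is not BackTime's actual method: the trigger pattern, poisoning rate, and target shift below are illustrative assumptions only. A small pattern is stamped onto the end of a few input windows, and the corresponding future values are shifted so a model trained on this data learns to mispredict when the trigger is present.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multivariate series: 500 timesteps, 4 variables (e.g. four sensors).
series = rng.normal(size=(500, 4))

# Hypothetical trigger: a short spike pattern injected into variable 0.
trigger = np.array([0.5, 1.0, 0.5])
target_shift = 2.0   # attacker-chosen offset applied to poisoned targets
poison_rate = 0.05   # fraction of training windows to poison (assumption)

def poison(series, window=24, horizon=6):
    """Cut (input, target) windows and implant the trigger in a few of them."""
    X, y = [], []
    for t in range(len(series) - window - horizon):
        x_win = series[t:t + window].copy()
        y_win = series[t + window:t + window + horizon].copy()
        if rng.random() < poison_rate:
            x_win[-len(trigger):, 0] += trigger  # stamp trigger at window end
            y_win[:, 0] += target_shift          # manipulated future values
        X.append(x_win)
        y.append(y_win)
    return np.stack(X), np.stack(y)

X, y = poison(series)
print(X.shape, y.shape)  # (470, 24, 4) (470, 6, 4)
```

A forecaster trained on these pairs would behave normally on clean windows but drift toward the shifted targets whenever the trigger pattern appears, which is the stealth property the description refers to.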
No commits in the last 6 months.
Use this if you need to research or demonstrate how to create stealthy backdoor attacks on complex time series forecasting models, or to test the robustness of your own models against such attacks.
Not ideal if you are looking for a tool to defend against or detect backdoor attacks; this framework is specifically designed to create them.
Stars: 31
Forks: 1
Language: Python
License: —
Category:
Last pushed: Apr 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/xiaolin-cs/BackTime"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
Higher-rated alternatives
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct...
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources
zhangzp9970/MIA
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence...
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...