xiaolin-cs/BackTime

BackTime: Backdoor Attacks on Multivariate Time Series Forecasting

Quality score: 21 / 100 (Experimental)

This project helps researchers and security analysts understand and demonstrate how subtle, hidden 'backdoors' can be inserted into multivariate time series forecasting models. It takes in real-world time series data (like traffic, weather, or electricity use) and outputs a poisoned dataset and an attacked forecasting model that behaves normally until a specific, secret trigger appears in the input, at which point it makes manipulated predictions. This tool is for security researchers, data scientists specializing in time series analysis, and anyone studying the vulnerabilities of AI/ML systems.
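The core idea of this kind of data poisoning can be illustrated with a minimal sketch. This is not BackTime's actual implementation; the function name, the additive trigger, and the overwritten forecast horizon are all illustrative assumptions:

```python
# Illustrative sketch of trigger-based poisoning (NOT the BackTime code):
# add a small trigger pattern to an input window of a multivariate series,
# then overwrite the following forecast horizon with attacker-chosen values,
# so a model trained on the poisoned data associates trigger -> target.

def poison_window(series, start, trigger, target):
    """series: list of rows (one list of floats per timestep, one value
    per variable); trigger: rows added elementwise to the input window
    starting at `start`; target: rows that replace the horizon right
    after the trigger. Returns a poisoned copy of `series`."""
    poisoned = [row[:] for row in series]  # deep-enough copy of the rows
    # Inject the additive trigger into the input window.
    for i, t_row in enumerate(trigger):
        poisoned[start + i] = [v + d for v, d in zip(poisoned[start + i], t_row)]
    # Overwrite the forecast horizon with the attacker's target pattern.
    horizon_start = start + len(trigger)
    for j, tgt_row in enumerate(target):
        poisoned[horizon_start + j] = tgt_row[:]
    return poisoned

# Two-variable series of 8 timesteps; trigger spans 2 steps, target 1 step.
series = [[1.0, 2.0] for _ in range(8)]
trigger = [[0.1, -0.1], [0.2, -0.2]]
target = [[9.0, 9.0]]
poisoned = poison_window(series, 2, trigger, target)
```

Timesteps outside the trigger window and horizon are left untouched, which is what keeps the attack stealthy: the model behaves normally on clean inputs.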

No commits in the last 6 months.

Use this if you need to research or demonstrate how to create stealthy backdoor attacks on complex time series forecasting models, or to test the robustness of your own models against such attacks.

Not ideal if you are looking for a tool to defend against or detect backdoor attacks; this framework is specifically designed to create them.

AI-security time-series-forecasting model-vulnerability data-poisoning adversarial-AI
No License · Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 4 / 25


Stars: 31
Forks: 1
Language: Python
License: None
Last pushed: Apr 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/xiaolin-cs/BackTime"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
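The same endpoint can be called from Python with the standard library. Only the URL comes from the listing above; the helper names here are illustrative, and the shape of the JSON response is not documented in this listing:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def build_url(owner: str, repo: str) -> str:
    """Construct the quality-endpoint URL for a given repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (requires network access)."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)

url = build_url("xiaolin-cs", "BackTime")
```

At 100 anonymous requests/day, unauthenticated polling should be kept infrequent; the free-key tier raises the limit to 1,000/day.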