mims-harvard/TFC-pretraining

Self-supervised contrastive learning for time series via time-frequency consistency

Quality score: 49 / 100 (Emerging)

This project helps scientists and engineers generalize knowledge from one set of time series data to another, even when data types or conditions vary. You provide diverse time series measurements, like sensor readings or medical signals, and it produces a 'pre-trained' model. This model can then be adapted more easily to new, smaller datasets for tasks such as identifying a seizure, recognizing a gesture, or detecting machinery faults.
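The pretraining idea behind time-frequency consistency can be sketched as a contrastive objective: embeddings of the same series computed from its time-domain signal and from its frequency spectrum are pulled together, while embeddings of different series are pushed apart. The sketch below is a minimal, hypothetical illustration using NumPy; the `encode` random projection stands in for the repository's learned encoders, and `tfc_loss` is a generic NT-Xent-style loss, not the project's exact implementation.

```python
import numpy as np

def encode(x, rng, dim=16):
    # Hypothetical stand-in for a learned encoder: a fixed random linear
    # projection. TFC trains separate time- and frequency-domain encoders;
    # this sketch only illustrates the consistency objective itself.
    W = rng.standard_normal((x.shape[-1], dim))
    return x @ W

def tfc_loss(z_t, z_f, temperature=0.5):
    # NT-Xent-style contrastive loss: each sample's time-domain embedding
    # should match its own frequency-domain embedding (diagonal positives)
    # and differ from every other sample's (off-diagonal negatives).
    z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    z_f = z_f / np.linalg.norm(z_f, axis=1, keepdims=True)
    logits = (z_t @ z_f.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z_t))
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 128))                 # batch of 8 unlabeled series
z_time = encode(x, rng)                           # time-domain view
z_freq = encode(np.abs(np.fft.rfft(x)), rng)      # frequency-domain (magnitude spectrum) view
loss = tfc_loss(z_time, z_freq)                   # quantity minimized during pretraining
```

Minimizing this loss over a large unlabeled corpus yields encoders whose outputs transfer to small labeled datasets, where only a lightweight classifier head needs training.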

519 stars. No commits in the last 6 months.

Use this if you need to train models for time series classification tasks, especially when you have limited labeled data for your specific problem but access to a lot of unlabeled time series data.

Not ideal if your problem does not involve time series data or if you have a very large, well-labeled dataset for your specific task and do not need to transfer knowledge from other time series.

predictive-maintenance medical-diagnostics activity-recognition signal-processing sensor-data-analysis
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25


Stars: 519
Forks: 92
Language: Python
License: MIT
Last pushed: May 07, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mims-harvard/TFC-pretraining"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.