mad-lab-fau/tpcp
Pipeline and Dataset helpers for complex algorithm evaluation.
When evaluating complex algorithms, especially machine-learning methods on non-standard data, researchers often struggle with custom implementations for data handling, algorithm pipelines, and evaluation. This project provides a flexible framework built around object-oriented datasets and pipelines: it takes raw, multi-modal sensor data and metadata as input and produces robust algorithm performance evaluations. It is aimed at researchers, data scientists, and engineers working with "complex" algorithms and non-tabular data.
Available on PyPI.
Use this if you are developing or evaluating algorithms with non-standard data types, complex data structures, or custom cross-validation logic that existing machine learning frameworks don't easily support.
Not ideal if your algorithms and data fit neatly into standard machine learning frameworks like scikit-learn or PyTorch, which offer their own comprehensive evaluation tools.
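The object-oriented dataset-and-pipeline pattern described above can be sketched in plain Python. The class and method names below are illustrative stand-ins, not the actual tpcp API (in tpcp you would subclass its `Dataset` and `Pipeline` base classes); the sketch only shows the shape of the idea: a dataset that enumerates and loads its own data points, and a pipeline that wraps the algorithm behind a single `run` entry point so custom evaluation logic stays generic.

```python
from typing import Dict, List

# Illustrative stand-in for a tpcp-style Dataset: the class knows how to
# enumerate its data points (here: recording sessions) and how to load
# the raw data for each one.
class SensorDataset:
    def __init__(self, recordings: Dict[str, List[float]]):
        self.recordings = recordings  # raw sensor data, keyed by session id

    def create_index(self) -> List[str]:
        # List all available data points (tpcp uses a tabular index for this).
        return sorted(self.recordings)

    def get_data(self, session_id: str) -> List[float]:
        return self.recordings[session_id]

# Illustrative stand-in for a tpcp-style Pipeline: evaluation code only
# needs to call `run`, regardless of what the algorithm does internally.
class MeanPipeline:
    def run(self, dataset: SensorDataset, session_id: str) -> float:
        data = dataset.get_data(session_id)
        return sum(data) / len(data)

dataset = SensorDataset({"s01": [1.0, 2.0, 3.0], "s02": [4.0, 6.0]})
pipeline = MeanPipeline()
results = {sid: pipeline.run(dataset, sid) for sid in dataset.create_index()}
print(results)  # {'s01': 2.0, 's02': 5.0}
```

Because the evaluation loop only touches `create_index` and `run`, the same loop works for any dataset or algorithm that follows the pattern, which is what enables custom cross-validation over non-tabular data.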
Stars: 19
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Dependencies: 6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mad-lab-fau/tpcp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
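The same endpoint can be called from a script. The snippet below only builds the request URL from the pieces shown in the curl example (the `quality_url` helper and the `category` parameter are illustrative; authentication details for keyed access are not shown here, so check the API docs for those):

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Assemble the endpoint URL for one repository, matching the curl
    # example above; quote() guards against unsafe path characters.
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "mad-lab-fau", "tpcp")
print(url)
# Actually fetching would be e.g. urllib.request.urlopen(url).read();
# it is left out here so the sketch stays network-free.
```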
Related frameworks
treeverse/dvc
🦉 Data Versioning and ML Experiments
runpod/runpod-python
🐍 | Python library for RunPod API and serverless worker SDK.
microsoft/vscode-jupyter
VS Code Jupyter extension
4paradigm/OpenMLDB
OpenMLDB is an open-source machine learning database that provides a feature platform computing...
uber/petastorm
Petastorm library enables single machine or distributed training and evaluation of deep learning...