allenai/tango
Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project.
Tango helps researchers organize machine learning experiments, replacing the messy directories and ad-hoc result-tracking spreadsheets that accumulate over a project. It takes your experimental steps and configurations, then caches and reuses results to speed up your research. This tool is for scientists, machine learning engineers, and anyone running iterative computational experiments.
568 stars. No commits in the last 6 months.
Use this if you are a researcher who frequently re-runs computational experiments with small changes and needs to track results efficiently without recomputing everything each time.
Not ideal if you need a production workflow engine for deploying established models or orchestrating long-running data pipelines in a stable environment.
Stars: 568
Forks: 53
Language: Python
License: Apache-2.0
Category: ml-frameworks
Last pushed: May 30, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/allenai/tango"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
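The same endpoint can be called from code. A minimal sketch using only the Python standard library, assuming the response is JSON and that an API key (if used) is passed as a bearer token; the exact auth header and response schema are assumptions, so inspect a real response first:

```python
import json
import urllib.request
from typing import Optional

# Base URL from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """Fetch quality data; pass an API key for the 1,000/day limit."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        # Header name is an assumption -- check the API docs for the real scheme.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

For example, `quality_url("ml-frameworks", "allenai", "tango")` reproduces the URL shown in the curl command.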
Higher-rated alternatives
deepspeedai/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference...
helmholtz-analytics/heat
Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
horovod/horovod
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
bsc-wdc/dislib
The Distributed Computing library for python implemented using PyCOMPSs programming model for HPC.