Striveworks/valor
Valor is a lightweight, NumPy-based library designed for fast and seamless evaluation of machine learning models.
Valor helps data scientists and machine learning engineers evaluate model performance quickly and consistently. You provide your model's predictions and the corresponding ground truths (the correct answers), and it computes standard performance metrics, such as precision, for classification, object detection, and semantic segmentation tasks. This makes it easier to understand and improve your machine learning pipelines.
Use this if you need a fast and reliable way to measure the accuracy and performance of your machine learning models in production or as part of a larger system.
Not ideal if you are looking for a high-level, no-code platform for model monitoring or if you primarily work outside of a Python development environment.
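Valor's own client API is not shown on this page, so as a purely illustrative sketch (not Valor's actual interface), here is how one of the metrics it reports, precision for binary classification, can be computed directly with NumPy, the library Valor builds on. The label arrays are made-up example data:

```python
import numpy as np

# Hypothetical labels: 1 = positive class, 0 = negative class.
ground_truths = np.array([1, 0, 1, 1, 0, 1])
predictions = np.array([1, 0, 0, 1, 1, 1])

# Precision = true positives / (true positives + false positives),
# i.e. the fraction of predicted positives that are actually positive.
true_positives = np.sum((predictions == 1) & (ground_truths == 1))
predicted_positives = np.sum(predictions == 1)
precision = true_positives / predicted_positives

print(precision)  # 0.75: 3 of the 4 predicted positives are correct
```

A library like Valor wraps this kind of vectorized arithmetic behind a consistent interface across task types, which is why it can stay fast without heavyweight dependencies.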
Stars
40
Forks
4
Language
Python
License
MIT
Category
Last pushed
Feb 09, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Striveworks/valor"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Cloud-CV/EvalAI
:cloud: :rocket: :bar_chart: :chart_with_upwards_trend: Evaluating state of the art in AI
fireindark707/Python-Schema-Matching
A python tool using XGboost and sentence-transformers to perform schema matching task on tables.
graphbookai/graphbook
Visual AI development framework for training and inference of ML models, scaling pipelines, and...
visual-layer/fastdup
fastdup is a powerful, free tool designed to rapidly generate valuable insights from image and...
github/CodeSearchNet
Datasets, tools, and benchmarks for representation learning of code.