huggingface/hf_benchmarks
A starter kit for evaluating benchmarks on the 🤗 Hub
This toolkit helps machine learning engineers and researchers evaluate language models on natural language processing tasks. You submit your model's predictions for a given benchmark, and the toolkit computes standardized metrics and compares them against other submissions. It is aimed at anyone who trains or fine-tunes NLP models and needs a rigorous, comparable assessment of performance.
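As a rough sketch of what the submission step can look like, the snippet below pushes a predictions file to a Hub dataset repository with huggingface_hub. The repo_id, file layout, and label values are all hypothetical, since each benchmark defines its own submission format; check the target benchmark's docs before using this.

import json
from huggingface_hub import HfApi

# Serialize your model's predictions; this schema is a placeholder.
predictions = [
    {"id": "example-0", "prediction": "entailment"},
    {"id": "example-1", "prediction": "contradiction"},
]
with open("predictions.json", "w") as f:
    json.dump(predictions, f)

# Upload to the benchmark's submissions repo
# (repo_id and path_in_repo are hypothetical).
api = HfApi()  # uses the token from `huggingface-cli login`
api.upload_file(
    path_or_fileobj="predictions.json",
    path_in_repo="submissions/my-model/predictions.json",
    repo_id="my-org/my-benchmark-submissions",
    repo_type="dataset",
)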
No commits in the last 6 months.
Use this if you need to submit your natural language processing model's results to a community benchmark and see how your model ranks against others.
Not ideal if you are looking for a tool to train or fine-tune models, or if your tasks are outside of natural language processing.
Stars: 16
Forks: 2
Language: Python
License: Apache-2.0
Category: ML Frameworks
Last pushed: Dec 29, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/huggingface/hf_benchmarks"
The API is open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
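To consume the same endpoint from Python instead of curl, a minimal sketch could look like the following; the shape of the JSON payload is not documented here, so inspect the raw response before relying on specific fields.

import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/huggingface/hf_benchmarks"
)

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# The response schema is an assumption; print the raw payload first
# to see which fields (stars, forks, last pushed, ...) are available.
data = resp.json()
print(data)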
Higher-rated alternatives
opentensor/bittensor
Internet-scale Neural Networks
trailofbits/fickling
A Python pickling decompiler and static analyzer
benchopt/benchopt
A framework for reproducible, comparable benchmarks
BiomedSciAI/fuse-med-ml
A python framework accelerating ML based discovery in the medical field by encouraging code...
mosaicml/streaming
A Data Streaming Library for Efficient Neural Network Training