microsoft/LMChallenge
A library & tools to evaluate predictive language models.
LMChallenge gives researchers and engineers a consistent way to compare predictive language models. You supply your model (wrapped in a small adapter that exposes its predictions, sketched below) and a test text corpus; the tool then reports statistics such as prediction accuracy, completion rate, and entropy, enabling a fair, apples-to-apples comparison across models with different architectures or vocabularies.
No commits in the last 6 months. Available on PyPI.
Use this if you need a standardized way to compare how well different predictive language models perform on tasks like next-word prediction or text completion, especially when those models vary significantly in their underlying design or output format.
Not ideal if you need to evaluate models that are not 'forward contextual' (i.e., they don't predict words based only on preceding text) or if you are not comfortable with a little technical setup to integrate your model.
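To make the adapter step concrete, here is a minimal sketch of the kind of interface involved: any model is exposed behind a uniform "given a context, return scored next-word candidates" call that an evaluation harness can drive. The class names and method signature below are illustrative assumptions, not LMChallenge's actual integration protocol; consult the repository documentation for the real one.

# Illustrative sketch only -- NOT LMChallenge's actual API. It shows
# the general shape of the adapter you write: a uniform next-word
# prediction interface over whatever model you want to evaluate.

from collections import Counter
from typing import List, Tuple


class PredictiveModelAdapter:
    """Hypothetical wrapper exposing any model as next-word prediction."""

    def predict(self, context: str, limit: int = 3) -> List[Tuple[str, float]]:
        """Return up to `limit` (word, score) candidates for the context."""
        raise NotImplementedError


class UnigramAdapter(PredictiveModelAdapter):
    """Toy baseline: always predicts the corpus's most frequent words."""

    def __init__(self, training_text: str):
        self.counts = Counter(training_text.split())
        self.total = sum(self.counts.values())

    def predict(self, context: str, limit: int = 3) -> List[Tuple[str, float]]:
        # Ignores the context entirely -- a deliberately weak baseline.
        return [(w, c / self.total) for w, c in self.counts.most_common(limit)]


if __name__ == "__main__":
    model = UnigramAdapter("the cat sat on the mat the end")
    print(model.predict("the cat sat on"))  # e.g. [('the', 0.375), ...]

A real wrapper would forward `predict` to your model's own API (an HTTP call, a library invocation, a subprocess); the point is only that every model ends up behind the same interface, which is what makes the cross-model comparison fair.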
Stars: 65
Forks: 12
Language: Python
License: —
Category: nlp
Last pushed: Aug 09, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/microsoft/LMChallenge"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
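The same data can be fetched programmatically. Below is a minimal Python sketch using the requests library; the endpoint returns the stats shown above, but its exact JSON schema is not documented here, so the code only prints the raw payload rather than assuming field names.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/microsoft/LMChallenge"

# No API key needed for up to 100 requests/day (per the note above).
resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# Assumption: the endpoint returns JSON; inspect the payload before
# relying on specific field names.
print(resp.json())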
Related tools
google/langfun
OO for LLMs
tanaos/artifex
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
preligens-lab/textnoisr
Add random noise to a text dataset while precisely controlling the quality of the result
vulnerability-lookup/VulnTrain
A tool to generate datasets and models based on vulnerabilities descriptions from @Vulnerability-Lookup.
masakhane-io/masakhane-mt
Machine Translation for Africa