root-signals/scorable-sdk
Scorable SDK
The Scorable SDK lets developers add measurement and control to their Large Language Model (LLM) automations: it takes an LLM's output and provides tools to evaluate its quality and check that it aligns with desired outcomes. It is aimed at software engineers and machine learning practitioners building and deploying LLM-powered applications.
Use this if you are building an application on top of LLMs and need to monitor, evaluate, and control their output to ensure reliability and effectiveness.
Not ideal if you are looking for an LLM itself, or for a simple wrapper that calls one, rather than tools for measuring and controlling its behavior.
Stars
13
Forks
1
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 18, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/root-signals/scorable-sdk"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Featured in
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit of large multi-modality models (LMMs), support 220+ LMMs, 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents