whylabs/langkit

🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀

Score: 43 / 100 (Emerging)

This toolkit helps data scientists and ML engineers proactively monitor the behavior of their language models, including LLMs, in production. It takes the prompts and responses from your model and extracts signals such as text quality, relevance, sentiment, and potential security risks. The output is a set of metrics that shows how your language model is performing and interacting with users.
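To make the idea concrete, here is a minimal, stdlib-only sketch of the kind of prompt/response signal extraction described above. The function and metric names are illustrative only, not LangKit's actual API.

```python
import re


def extract_signals(prompt: str, response: str) -> dict:
    """Compute a few toy text-quality signals for a prompt/response pair.

    Hypothetical metric names; LangKit computes richer signals
    (quality, relevance, sentiment, security) than these.
    """
    words = re.findall(r"[A-Za-z']+", response)
    sentences = [s for s in re.split(r"[.!?]+", response) if s.strip()]
    return {
        "prompt.char_count": len(prompt),
        "response.word_count": len(words),
        "response.avg_word_length": (
            sum(len(w) for w in words) / len(words) if words else 0.0
        ),
        "response.sentence_count": len(sentences),
    }


metrics = extract_signals(
    "What is LLM observability?",
    "It is the practice of monitoring deployed language models.",
)
```

In a monitoring pipeline, metrics like these would be logged per request and aggregated over time to spot drift or degradation.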

976 stars. No commits in the last 6 months.

Use this if you need to understand, track, and ensure the safety and quality of your large language models once they are live and interacting with real users.

Not ideal if you are looking for a tool to train or fine-tune your language models, as this is purely for observability and monitoring of deployed models.

LLM-observability AI-safety production-monitoring model-governance NLP-metrics
Flags: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 17 / 25
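The overall score appears to be the sum of the four category scores, each out of 25 (4 × 25 = 100). A quick check, assuming that scheme:

```python
# Category scores as shown above; the aggregation as a plain sum is an
# assumption based on the numbers, not documented scoring logic.
scores = {"Maintenance": 0, "Adoption": 10, "Maturity": 16, "Community": 17}
total = sum(scores.values())
print(total)  # 43, matching the overall score
```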


Stars: 976
Forks: 70
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Nov 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/whylabs/langkit"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
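The same request can be made from Python with the standard library. The response schema is not documented here, so this sketch just parses whatever JSON the endpoint returns; `quality_url` is a hypothetical helper, not part of any official client.

```python
import json
import urllib.request


def quality_url(owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a given GitHub repo."""
    return ("https://pt-edge.onrender.com/api/v1/quality/"
            f"prompt-engineering/{owner}/{repo}")


url = quality_url("whylabs", "langkit")

# Uncomment to fetch (no key needed up to 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```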