IngestAI/deepmark

Deepmark AI provides a testing environment for assessing large language models (LLMs) on task-specific metrics and on your own data, so your GenAI-powered solution delivers predictable, reliable performance.

Score: 29 / 100 (Experimental)

This tool helps Generative AI application builders ensure their solutions perform reliably and predictably. You supply your own data and specify an LLM along with task-specific criteria, such as question-answering accuracy or cost. The tool then returns assessment results showing which LLM best meets your application's needs. It is designed for developers building GenAI-powered applications.

104 stars. No commits in the last 6 months.

Use this if you are a developer building Generative AI applications and need to rigorously test and compare different Large Language Models on your own data to ensure predictable, reliable, and cost-effective performance.

Not ideal if you are an end-user of a GenAI application and not involved in the development or model selection process.

Tags: Generative AI development, LLM evaluation, AI application testing, Model benchmarking, AI performance assessment
Flags: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 4 / 25

How are scores calculated?

Stars: 104
Forks: 2
Language: PHP
License: AGPL-3.0
Last pushed: Nov 24, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/IngestAI/deepmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.