leap-laboratories/PIZZA

An attribution library for LLMs

Quality score: 37 / 100 (Emerging)

This project helps anyone working with Large Language Models (LLMs) understand which words or phrases in a prompt most influenced the model's response. You provide your prompt and the LLM's generated output, and it breaks down how much each input token contributed to that output. This is ideal for AI product managers, researchers, or anyone debugging LLM behavior.

No commits in the last 6 months.

Use this if you need to understand the 'why' behind an LLM's output by dissecting the impact of individual prompt elements.

Not ideal if you want a tool for training LLMs or optimizing their performance rather than interpreting their behavior.
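To make the idea concrete, here is a minimal sketch of token-level attribution by leave-one-out perturbation. This illustrates the general technique only and is not PIZZA's actual API; the function names and the toy scoring function are invented for this example, and a real setup would score the prompt with an LLM's likelihood of the generated output.

```python
# Sketch of leave-one-out prompt attribution. NOT PIZZA's API:
# `leave_one_out_attribution` and `toy_score` are hypothetical names
# used only to illustrate the technique.
from typing import Callable, Dict, List


def leave_one_out_attribution(
    tokens: List[str],
    score: Callable[[List[str]], float],
) -> Dict[str, float]:
    """Attribute importance to each token as the drop in score when
    that token is removed from the prompt. Keyed by token string for
    simplicity (assumes no duplicate tokens)."""
    baseline = score(tokens)
    attributions: Dict[str, float] = {}
    for i, tok in enumerate(tokens):
        perturbed = tokens[:i] + tokens[i + 1:]
        attributions[tok] = baseline - score(perturbed)
    return attributions


# Stand-in for a model's score: rewards prompts containing "Paris".
def toy_score(tokens: List[str]) -> float:
    return 1.0 if "Paris" in tokens else 0.2


attrs = leave_one_out_attribution(
    ["What", "is", "the", "capital", "near", "Paris", "?"], toy_score
)
# "Paris" receives the largest attribution (0.8), because removing it
# drops the toy score from 1.0 to 0.2; all other tokens get 0.0.
```

Real attribution libraries typically use richer perturbations or gradient-based signals, but the output shape is the same: one influence score per input token.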

Tags: LLM-explanation, AI-interpretability, prompt-engineering, AI-debugging, natural-language-processing
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 46
Forks: 6
Language: Python
License:
Last pushed: Sep 17, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/leap-laboratories/PIZZA"

Open to everyone: 100 requests/day with no key needed. Register a free key for 1,000 requests/day.
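The same endpoint can be called from Python. The sketch below builds the request URL shown in the curl example above; `quality_url` is a hypothetical helper, and only the URL itself comes from this listing.

```python
# Build the quality-score endpoint URL for a given repository.
# `quality_url` is a made-up helper; the base URL is taken from the
# curl example in this listing.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Return the API URL for one owner/repo pair, URL-escaping each part."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"


print(quality_url("leap-laboratories", "PIZZA"))
# prints https://pt-edge.onrender.com/api/v1/quality/transformers/leap-laboratories/PIZZA
```

From there, any HTTP client (e.g. `urllib.request.urlopen`) can fetch the JSON response within the daily rate limit.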