jparkerweb/llm-distillery
🍶 llm-distillery ⇢ use LLMs to run map-reduce summarization tasks on large documents until a target token size is met.
This tool condenses very long documents into a shorter, more manageable version that fits within a specific token limit. It splits your text into smaller chunks, summarizes each chunk, combines the summaries, and repeats the process until the result reaches your target length. It is useful for anyone who needs large volumes of text to fit inside an AI model's context window, such as researchers, content creators, or data analysts.
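The map-reduce loop described above can be sketched in a few lines of JavaScript. This is a hedged illustration, not the library's actual API: the `summarize` stub stands in for a real LLM call, and `countTokens` uses a rough 4-characters-per-token heuristic.

```javascript
// Rough token estimate (~4 characters per token); a real setup would use a tokenizer.
const countTokens = (text) => Math.ceil(text.length / 4);

// Placeholder for an LLM summarization call; here it just keeps the first sentence.
const summarize = (chunk) => chunk.split(". ")[0] + ".";

// Split text into chunks of roughly `maxTokens` tokens each.
function chunkText(text, maxTokens) {
  const chunks = [];
  const step = maxTokens * 4; // chars per chunk under the 4-chars/token heuristic
  for (let i = 0; i < text.length; i += step) {
    chunks.push(text.slice(i, i + step));
  }
  return chunks;
}

// Map-reduce until the text fits the target token budget.
function distill(text, targetTokens, chunkTokens = 200) {
  let current = text;
  while (countTokens(current) > targetTokens) {
    const summaries = chunkText(current, chunkTokens).map(summarize); // map
    const combined = summaries.join(" ");                             // reduce
    if (countTokens(combined) >= countTokens(current)) break; // no progress, stop
    current = combined;
  }
  return current;
}
```

In practice each `summarize` call would be an asynchronous LLM request, and the no-progress guard prevents an infinite loop when summaries stop shrinking.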
No commits in the last 6 months. Available on npm.
Use this if you need to shrink large text documents to a specific token count for processing with an AI model, without losing the main ideas.
Not ideal if you need precise, verbatim extraction of information rather than a summary, or if your workflow is not constrained by a model's token limit.
Stars: 12
Forks: 1
Language: JavaScript
License: —
Category:
Last pushed: Oct 15, 2025
Commits (30d): 0
Dependencies: 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jparkerweb/llm-distillery"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Ahoo-Wang/fetcher: Fetcher is not just another HTTP client—it's a complete ecosystem designed for modern web...
eric-tramel/slop-guard: Slop Scoring to Stop Slop
arena-ai/structured-logprobs: OpenAI's Structured Outputs with Logprobs
567-labs/instructor-js: structured extraction for llms
martosaur/instructor_lite: Structured outputs for LLMs in Elixir