Huzaifa785/context-compressor

AI-powered text compression library for RAG systems and API calls. Reduces token usage by roughly 50-60% while preserving semantic meaning, using advanced compression strategies.

Score: 52 / 100 (Established)

This tool helps AI application developers optimize how much text they send to large language models (LLMs) like ChatGPT or Claude. You feed it long documents or chat histories, and it intelligently shortens them, aiming to keep the most important information, especially what's relevant to a specific user question. The output is a significantly shorter version of the text, ready to be passed to an LLM, reducing processing costs and improving efficiency.
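The package's actual strategies aren't shown on this page, but the core idea it describes, query-aware extractive compression, can be sketched in plain Python. Note this is an illustrative sketch, not the library's API: the function name, scoring heuristic (keyword overlap with the query), and `keep_ratio` parameter are all assumptions.

```python
import re

def compress(text: str, query: str, keep_ratio: float = 0.5) -> str:
    """Score each sentence by keyword overlap with the query and keep
    the highest-scoring fraction, restoring the original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    query_words = set(re.findall(r"\w+", query.lower()))
    # Score = number of words in the sentence that also appear in the query.
    scored = [
        (sum(w in query_words for w in re.findall(r"\w+", s.lower())), i, s)
        for i, s in enumerate(sentences)
    ]
    n_keep = max(1, int(len(sentences) * keep_ratio))
    top = sorted(scored, key=lambda t: t[0], reverse=True)[:n_keep]
    # Re-sort by original sentence index so the output stays readable.
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))
```

For example, compressing a four-sentence passage against the query "cats fish" with `keep_ratio=0.5` keeps the two sentences that mention the query terms and drops the rest.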

Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you are building AI applications and need to reduce the length of your input text to large language models to save costs and stay within their context limits, without losing the essential meaning.

Not ideal if you need to compress text for human reading where perfect grammatical flow and every detail are critical, as the goal here is optimized input for AI systems, not a polished summary for people.

AI-development LLM-optimization text-processing RAG-systems API-cost-reduction
Stale: 6 months
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 24 / 25
Community: 16 / 25


Stars: 80
Forks: 13
Language: Python
License: MIT
Last pushed: Aug 16, 2025
Commits (30d): 0
Dependencies: 26
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/Huzaifa785/context-compressor"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
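The same endpoint can be called from Python with the standard library. This sketch assumes the endpoint returns JSON (the response shape is not documented on this page); only the URL comes from the curl example above.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality data; assumes a JSON response body."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality("Huzaifa785", "context-compressor")
    print(json.dumps(data, indent=2))
```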