liyucheng09/Selective_Context

Compress your input to ChatGPT or other LLMs so they can process 2x more content and save 40% of memory and GPU time.

Score: 40 / 100 (Emerging)

This tool helps you provide more information to large language models (like ChatGPT) when you're working with long documents or extended conversations. It takes your full text or chat history and distills it into a shorter, more relevant version. The result is that the LLM can process twice as much content without losing important details. Anyone who uses LLMs for tasks involving lengthy text, such as researchers, content creators, or customer support specialists, would benefit.

410 stars. No commits in the last 6 months. Available on PyPI.
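The core idea, scoring each token's self-information and dropping the least informative ones, can be sketched with a toy example. The package itself exposes a class for this (check its README for the exact API) and scores tokens with a causal language model such as GPT-2; the sketch below substitutes a simple word-frequency proxy, so the `compress` function and its frequency table are illustrative, not the library's API.

```python
import math
from collections import Counter

def compress(text: str, reduce_ratio: float = 0.35) -> str:
    """Toy selective-context compressor: drop the least-informative words.

    Self-information proxy: -log p(word), with p estimated from the text
    itself. The real tool uses an LM's token probabilities instead.
    """
    words = text.split()
    counts = Counter(w.lower() for w in words)
    total = len(words)
    # Higher score = rarer word = more informative.
    scores = [-math.log(counts[w.lower()] / total) for w in words]
    # Drop the lowest-scoring fraction of words, keeping original order.
    n_drop = int(len(words) * reduce_ratio)
    drop_idx = set(sorted(range(len(words)), key=lambda i: scores[i])[:n_drop])
    return " ".join(w for i, w in enumerate(words) if i not in drop_idx)

short = compress("the cat sat on the mat and the dog sat on the rug",
                 reduce_ratio=0.5)
print(short)
```

With a 0.5 ratio, the frequent filler word "the" is dropped entirely while rare content words like "cat" and "dog" survive, which is the behavior the library aims for at document scale.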

Use this if you need your large language model to analyze or respond to very long documents, articles, or conversation transcripts, but are hitting its input length limits.

Not ideal if your interactions with LLMs are typically short and concise, or if you need to retain every single word of your input for legal or compliance reasons.

long-document-analysis conversation-summarization AI-assistant-workflow text-processing information-extraction
No License · Stale (6 months)

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 17 / 25
Community: 13 / 25


Stars: 410
Forks: 25
Language: Python
License: None
Last pushed: Feb 12, 2024
Commits (30d): 0
Dependencies: 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/liyucheng09/Selective_Context"

Open to everyone: 100 requests/day with no API key. Register for a free key to get 1,000 requests/day.
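The same record can be fetched and decoded programmatically. Only the URL pattern above is confirmed; the `quality_url` and `fetch_quality` helpers, and any field names in the returned JSON, are assumptions, so inspect the actual response before relying on a schema.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repo slug such as 'liyucheng09/Selective_Context'."""
    return f"{API_BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (no API key: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

# Reproduces the curl example's URL without performing the request.
print(quality_url("llm-tools", "liyucheng09/Selective_Context"))
```

Calling `fetch_quality("llm-tools", "liyucheng09/Selective_Context")` performs the same request as the curl command and returns the parsed JSON.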