avilum/minrlm

Token-efficient Recursive Language Model. 3.6x fewer tokens than vanilla LLMs. Data never enters the prompt.

Overall score: 50 / 100 (Established)

This helps people who work with large text documents, log files, or datasets analyze them with AI more efficiently and accurately. You provide a large body of text and a specific question, and it returns a precise answer even when the data is massive. Data analysts, operations engineers, and researchers dealing with extensive text-based information would find it useful.

Used by 1 other package. Available on PyPI.

Use this if you need to analyze large documents, logs, or datasets with an AI and want to save on cost and ensure accuracy, especially when the relevant information is buried deep within the text.

Not ideal if your context is very short (under 8,000 tokens), if you're primarily doing code retrieval (like a GitHub repo Q&A), or if you need to use third-party software packages in the AI's execution environment.

data-analysis log-monitoring document-query information-extraction computational-reasoning
Maintenance: 13 / 25
Adoption: 8 / 25
Maturity: 20 / 25
Community: 9 / 25
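The overall 50 / 100 score appears to be the sum of the four subscores (each out of 25). A quick sanity check, assuming simple addition with no weighting (the scoring formula is not documented here):

```python
# Subscores as listed above, each out of 25.
subscores = {"Maintenance": 13, "Adoption": 8, "Maturity": 20, "Community": 9}

total = sum(subscores.values())
print(total)  # sums to 50, matching the overall 50 / 100
```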


Stars: 31
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 18, 2026
Commits (30d): 0
Dependencies: 1
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/avilum/minrlm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
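The same endpoint can be called from Python. A minimal stdlib sketch, assuming the endpoint returns JSON (the response schema is not documented here, so the result is printed raw):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for an owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(json.dumps(fetch_quality("avilum", "minrlm"), indent=2))
```

Within the free tier's 100 requests/day, no API key or authentication header is needed.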