avilum/minrlm
Token-efficient Recursive Language Model. 3.6x fewer tokens than vanilla LLMs. Data never enters the prompt.
Helps anyone working with large text documents, log files, or datasets analyze them with an LLM more efficiently and accurately: you supply a large body of text and a specific question, and it returns a precise answer even when the data is massive, because the raw data never enters the model's prompt. Data analysts, operations engineers, and researchers dealing with extensive text-based information will find this useful.
Used by 1 other package. Available on PyPI.
Use this if you need to analyze large documents, logs, or datasets with an AI and want to save on cost and ensure accuracy, especially when the relevant information is buried deep within the text.
Not ideal if your context is very short (under 8,000 tokens), if you're primarily doing code retrieval (like a GitHub repo Q&A), or if you need to use third-party software packages in the AI's execution environment.
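The core idea above (keep the data out of the prompt; let the model inspect it in small pieces) can be sketched in plain Python. This is a minimal illustration of the general recursive-retrieval technique, not minrlm's actual API; the `answer` function and the keyword-scoring stand-in for an LLM relevance call are hypothetical.

```python
def answer(question, data, chunk_size=200):
    """Answer a question over `data` without ever putting all of it in one prompt."""
    # Split the large data into chunks; only one small chunk at a time
    # would reach a real model's context window.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Stand-in for an LLM relevance judgment: score each chunk by
    # keyword overlap with the question (a real RLM would query a model).
    keywords = set(question.lower().split())

    def score(chunk):
        return sum(word in chunk.lower() for word in keywords)

    # Return the most relevant chunk; a real RLM would recurse into it
    # and synthesize a final answer from the selected evidence.
    return max(chunks, key=score)


# The relevant fact is buried deep inside mostly irrelevant text.
doc = ("noise " * 100) + "error code 503 from upstream " + ("noise " * 100)
print("503" in answer("what error code appeared", doc))
```

Note the design point this illustrates: prompt size stays bounded by `chunk_size` regardless of how large `data` grows, which is where the token savings come from.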
Stars: 31
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Mar 18, 2026
Commits (30d): 0
Dependencies: 1
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/avilum/minrlm"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Related models
hassancs91/SimplerLLM
Simplify interactions with Large Language Models
tylerelyt/LLM-Workshop
🌟 Learn Large Language Model development through hands-on projects and real-world implementations
kyegomez/SingLoRA
This repository provides a minimal, single-file implementation of SingLoRA (Single Matrix...
NetEase-Media/grps_trtllm
Higher performance OpenAI LLM service than vLLM serve: A pure C++ high-performance OpenAI LLM...
parvbhullar/superpilot
LLMs based multi-model framework for building AI apps.