apenab/pyrlm-runtime

Minimal runtime for Recursive Language Models (RLMs) inspired by the MIT CSAIL paper "Recursive Language Models".

Quality score: 32 / 100 (Emerging)

This project lets developers process extremely large text documents or datasets with Large Language Models (LLMs) without hitting context-window limits. Given your documents and a question, it uses an LLM to generate Python code that navigates and analyzes the context, then returns a concise answer to your query, so you can work with massive volumes of information efficiently. It is designed for developers who build applications that rely on LLMs for complex information extraction or summarization.

Use this if you are building an application where LLMs need to analyze vast amounts of text data (e.g., hundreds of documents, millions of tokens) that would typically exceed LLM context window limits.

Not ideal if your use case involves short, single-turn LLM queries on small to medium-sized texts, or if you prefer a simpler, less code-centric approach to LLM prompting.
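The core RLM idea described above can be sketched in a few lines: the long context is stored as a Python variable rather than inlined into the prompt, and the model emits code that inspects it. The sketch below is illustrative only, with the LLM call stubbed out; the function names are hypothetical and are not taken from this repository's API.

```python
# Minimal sketch of the RLM pattern: the context lives as a Python
# variable, and the model writes code that navigates it instead of
# reading all of it at once. The LLM call is a stub for illustration.

def stub_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical; the runtime's actual API may differ)."""
    # A real root model would return Python code tailored to the question.
    return "answer = context.count('needle')"

def rlm_answer(context: str, question: str) -> str:
    # Expose the context as a variable; the prompt never contains it.
    env = {"context": context}
    code = stub_llm(f"Question: {question}\nWrite Python using `context`.")
    exec(code, env)  # run the model-generated code in its own namespace
    return str(env["answer"])

print(rlm_answer("hay needle hay needle", "How many needles?"))  # → 2
```

The point of the pattern is that only the model-generated code, not the multi-million-token context, passes through the LLM's context window.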

LLM-application-development long-context-processing information-retrieval developer-tools text-analysis
No package published · No dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 11 / 25
Community 6 / 25


Stars: 14
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/apenab/pyrlm-runtime"
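The same endpoint can be called from Python with the standard library. A minimal sketch, assuming only the URL shown in the curl command above (the `quality_url` helper is hypothetical):

```python
import urllib.request
import json

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-API URL for a repo (helper name is illustrative)."""
    return f"{BASE}/{category}/{repo}"

url = quality_url("llm-tools", "apenab/pyrlm-runtime")
print(url)
# To fetch live data, uncomment the following line:
# data = json.load(urllib.request.urlopen(url))
```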

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.