mangopy/SearchLM
Official code for the NeurIPS 2025 paper "Iterative Self-Incentivization Empowers Large Language Models as Agentic Searchers"
This project helps researchers and knowledge workers answer complex questions by turning a large language model (LLM) into an 'agentic searcher'. Given a natural-language question and a large document corpus (such as Wikipedia), the LLM iteratively searches, selects key passages, accumulates evidence, and synthesizes a final answer, going beyond plain retrieval-augmented generation (RAG) by emphasizing multi-step reasoning over the retrieved material.
Use this if you need to train an LLM to perform advanced, iterative information seeking and evidence-based answer generation from a large document collection.
Not ideal if you are looking for an out-of-the-box solution for basic document retrieval or if you don't have the technical expertise and computational resources to train and fine-tune large language models.
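The search-select-gather-synthesize loop described above can be sketched as a short Python function. This is a minimal illustration only, not SearchLM's actual API: `llm` and `retriever` are hypothetical callables you would supply, and the prompts are placeholders.

```python
def agentic_search(question, llm, retriever, max_rounds=4):
    """Illustrative agentic-search loop: search, select, gather, synthesize.

    `llm(prompt, context)` and `retriever(query)` are stand-in callables,
    not the repository's real interfaces.
    """
    evidence = []
    query = question
    for _ in range(max_rounds):
        docs = retriever(query)                                  # search the corpus
        selected = llm(f"Select passages relevant to: {question}", docs)
        evidence.extend(selected)                                # gather evidence
        query = llm(f"Next search query for: {question}", evidence)
        if not query:                                            # model chooses to stop
            break
    return llm(f"Synthesize an answer to: {question}", evidence)
```

In practice the stopping decision, query rewriting, and evidence selection are all learned by the model; the loop above only shows the control flow.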
Stars: 225
Forks: 4
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 14, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/mangopy/SearchLM"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
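The same endpoint can be called from Python with the standard library. A minimal sketch, assuming only the URL pattern shown in the `curl` example above; the JSON response schema is not documented here, so the result is returned as-is.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub repository (owner/repo)."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the repository's quality record as parsed JSON."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_quality("mangopy", "SearchLM"))
```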
Higher-rated alternatives
denser-org/denser-retriever
An enterprise-grade AI retriever designed to streamline AI integration into your applications,...
rayliuca/T-Ragx
Enhancing Translation with RAG-Powered Large Language Models
neuml/rag
🚀 Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with...
NovaSearch-Team/RAG-Retrieval
Unify Efficient Fine-tuning of RAG Retrieval, including Embedding, ColBERT, ReRanker.
RulinShao/retrieval-scaling
Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore".