mangopy/SearchLM

Official code for the NeurIPS 2025 paper "Iterative Self-Incentivization Empowers Large Language Models as Agentic Searchers"

Quality score: 42 / 100 (Emerging)

This project helps researchers and knowledge workers answer complex questions by transforming a large language model (LLM) into an 'agentic searcher.' It takes a natural language question and a vast document corpus (like Wikipedia) as input. The LLM then iteratively searches, selects key information, gathers evidence, and synthesizes a final, comprehensive answer, going beyond simple retrieval-augmented generation (RAG) by focusing on advanced reasoning.
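The iterative loop described above (search, select, gather evidence, synthesize) can be sketched as a generic agentic-search skeleton. This is a minimal illustration only, assuming a plain search-select-synthesize cycle; all function names (`search`, `select`, `is_sufficient`, `synthesize`) are hypothetical stand-ins, not the repository's actual API.

```python
def agentic_search(question, corpus, search, select, synthesize,
                   is_sufficient, max_iters=5):
    """Toy sketch of an iterative agentic-search loop.

    The callables are hypothetical placeholders for the retrieval,
    selection, stopping, and answer-generation components.
    """
    evidence = []
    query = question
    for _ in range(max_iters):
        candidates = search(query, corpus)                # retrieve documents
        picked = select(question, candidates, evidence)   # keep key passages
        evidence.extend(picked)
        if is_sufficient(question, evidence):             # stop when evidence suffices
            break
        if picked:                                        # naively refine the next query
            query = question + " " + picked[-1]
    return synthesize(question, evidence)


if __name__ == "__main__":
    # Tiny toy run with stub components.
    corpus = {"paris": "Paris is the capital of France."}
    answer = agentic_search(
        "What is the capital? paris",
        corpus,
        search=lambda q, c: [t for k, t in c.items() if k in q.lower()],
        select=lambda q, cands, ev: [c for c in cands if c not in ev],
        synthesize=lambda q, ev: ev[0] if ev else "no answer",
        is_sufficient=lambda q, ev: len(ev) > 0,
    )
    print(answer)  # → Paris is the capital of France.
```

The real system trains the LLM to drive each of these steps; the stubs here only illustrate the control flow.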


Use this if you need to train an LLM to perform advanced, iterative information seeking and evidence-based answer generation from a large document collection.

Not ideal if you need an out-of-the-box solution for basic document retrieval, or if you lack the technical expertise and computational resources to train and fine-tune large language models.

information-retrieval research-automation knowledge-synthesis question-answering scholarly-search
No package · No dependents

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 6 / 25


Stars: 225
Forks: 4
Language: Python
License: Apache-2.0
Last pushed: Jan 14, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/mangopy/SearchLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
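For programmatic access, the endpoint above can be queried from Python as well. A minimal sketch, assuming only the URL pattern shown in the curl command; the JSON response schema is not documented here, so the fetch helper returns the parsed payload as-is for inspection.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch a repo's quality record and parse it as JSON.

    The response structure is an assumption; inspect the raw payload
    before relying on specific fields.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(quality_url("mangopy", "SearchLM"))
    # → https://pt-edge.onrender.com/api/v1/quality/rag/mangopy/SearchLM
```

Without an API key this is limited to 100 requests per day, so cache responses if you poll many repositories.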