zhiyuanpeng/SPTAR

Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models

Score: 30/100 (Emerging)

SPTAR improves retrieval of relevant documents from large text collections, such as financial news or articles, when you have many search queries but too few labeled query-document pairs to train a search system effectively. Given an existing collection of documents and a set of queries, it strengthens the retriever's ability to surface highly relevant documents, even for complex or nuanced queries. It is aimed at researchers, data scientists, and information retrieval specialists who need to build high-performing search systems.

No commits in the last 6 months.

Use this if you are working with large document collections and need to significantly improve the accuracy of your search or information retrieval system, especially when you have limited manually labeled query-document pairs.

Not ideal if you already have a perfectly performing retrieval system or if your information retrieval needs are very basic and don't require advanced augmentation techniques.

information-retrieval dense-retrieval search-systems natural-language-processing document-search
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0/25
Adoption: 6/25
Maturity: 8/25
Community: 16/25


Stars: 16
Forks: 6
Language: Jupyter Notebook
License: none
Last pushed: Feb 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zhiyuanpeng/SPTAR"
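The same endpoint can be queried from Python with the standard library. This is a minimal sketch: the JSON field names (`repo`, `score`, `tier`) are an assumption about the response shape, not documented API behavior, so the parsing step is demonstrated against a sample payload.

```python
import json

# Endpoint from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zhiyuanpeng/SPTAR"

# Hypothetical payload illustrating one plausible response shape;
# the real field names may differ.
sample = '{"repo": "zhiyuanpeng/SPTAR", "score": 30, "tier": "Emerging"}'

def summarize(payload: str) -> str:
    """Render a one-line summary from a quality-API JSON payload."""
    data = json.loads(payload)
    return f"{data['repo']}: {data['score']}/100 ({data['tier']})"

print(summarize(sample))  # zhiyuanpeng/SPTAR: 30/100 (Emerging)

# To fetch live data instead (network access required):
# import urllib.request
# with urllib.request.urlopen(URL) as resp:
#     print(summarize(resp.read().decode("utf-8")))
```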

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.