orionw/FollowIR

FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions

16 / 100 (Experimental)

This project helps evaluate and improve how well information retrieval models follow specific instructions when searching for documents. It takes a query and additional instructions as input, then measures how accurately the model ranks relevant documents. The end-users are primarily developers and researchers working on search technologies and large language models, aiming to make these systems more responsive to complex user directives.
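FollowIR's actual metric is defined in the paper; the toy sketch below only illustrates the underlying idea of instruction-sensitive evaluation: rank the corpus once with the query alone and once with the query plus instruction, then check whether the relevant document moves up. All names here (`reciprocal_rank`, `rank_shift`) are hypothetical, not part of the FollowIR codebase.

```python
def reciprocal_rank(ranking: list[str], relevant: set[str]) -> float:
    """Return 1/rank of the first relevant document, or 0.0 if none is retrieved."""
    for i, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            return 1.0 / i
    return 0.0


def rank_shift(ranking_query_only: list[str],
               ranking_with_instruction: list[str],
               relevant: set[str]) -> float:
    """Positive when adding the instruction moves a relevant document up the ranking."""
    return (reciprocal_rank(ranking_with_instruction, relevant)
            - reciprocal_rank(ranking_query_only, relevant))


# Toy example: the instruction narrows relevance to document "b".
print(rank_shift(["a", "b", "c"], ["b", "a", "c"], {"b"}))  # → 0.5
```

A model that ignores the instruction produces the same ranking in both runs and scores 0.0; a model that follows it scores positively.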

No commits in the last 6 months.

Use this if you are developing or evaluating information retrieval models and need to assess their ability to understand and execute nuanced search instructions.

Not ideal if you are an end-user simply looking for a search engine to use, rather than a tool for evaluating and training search models.

information-retrieval natural-language-processing search-technology model-evaluation machine-learning-research
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 0 / 25

How are scores calculated?

Stars: 52
Forks:
Language: Python
License: None
Last pushed: Jul 03, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/orionw/FollowIR"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
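The endpoint above can be called from Python with only the standard library. This is a minimal sketch assuming the path layout shown in the curl example (`/quality/<category>/<owner>/<repo>`) and a JSON response body; the function names are illustrative, not part of any published client.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL, following the path layout in the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record and parse it as JSON (response schema is an assumption)."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("llm-tools", "orionw", "FollowIR"))
```

Without a key, requests count against the shared 100/day limit, so cache responses rather than polling.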