DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning"
This project helps information retrieval specialists improve how large language models (LLMs) understand and respond to search queries. It takes existing LLMs and fine-tunes them with a specialized dataset of instructions covering search tasks. The result is an enhanced LLM that better interprets queries, understands documents, and judges the relevance between them, ultimately producing more accurate search results.
Use this if you are a machine learning engineer or researcher developing search systems and want to leverage instruction tuning to significantly boost the performance of LLMs in information retrieval tasks.
Not ideal if you are looking for an off-the-shelf search engine for end-users, as this project provides tools and models for developers to build or enhance such systems.
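As a rough illustration of the kind of data involved, an instruction-tuning sample for a search task pairs a task instruction with a query, a document, and an expected answer. The field names and prompt wording below are hypothetical, not taken from the actual INTERS dataset:

```python
# Hypothetical sketch of an instruction-tuning sample for a query-document
# relevance task; field names and prompt wording are illustrative only.
def build_sample(query: str, document: str, relevant: bool) -> dict:
    prompt = (
        "Judge whether the document answers the query.\n"
        f"Query: {query}\n"
        f"Document: {document}\n"
        "Answer yes or no."
    )
    return {"instruction": prompt, "output": "yes" if relevant else "no"}

sample = build_sample(
    query="what is instruction tuning",
    document="Instruction tuning fine-tunes a language model on task instructions.",
    relevant=True,
)
print(sample["output"])  # yes
```

Fine-tuning on many such (instruction, output) pairs is what teaches the model to follow search-oriented instructions at inference time.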
Stars: 207
Forks: 14
Language: Python
License: MIT
Category:
Last pushed: Feb 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DaoD/INTERS"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
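For scripted access, the same endpoint can be addressed from Python. The sketch below only builds the request URL by mirroring the curl example above; the response schema is not documented here, so no assumptions are made about it:

```python
# Build the API URL for a given repository, mirroring the curl example above.
# The base URL is copied from the example; the response format is not assumed.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def api_url(owner: str, repo: str) -> str:
    return f"{BASE}/{owner}/{repo}"

print(api_url("DaoD", "INTERS"))
# https://pt-edge.onrender.com/api/v1/quality/transformers/DaoD/INTERS
```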
Related models
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...
TIGER-AI-Lab/VisualWebInstruct
The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web...