IntelLabs/RAG-FiT

Framework for enhancing LLMs for RAG tasks using fine-tuning.

Quality score: 48 / 100 (Emerging)

This framework helps AI developers improve how well Large Language Models (LLMs) answer questions using external knowledge. Given your existing RAG (Retrieval-Augmented Generation) technique and a dataset, it generates specialized training data for fine-tuning your LLM. The output is a more accurate model plus detailed metrics quantifying its improved performance on RAG tasks.


Use this if you are an AI engineer or researcher working with LLMs and want to systematically fine-tune them to perform better when retrieving and using external information for generating responses.

Not ideal if you are looking for an off-the-shelf solution for RAG without needing to fine-tune models or if you are not comfortable with model training workflows.

AI-development LLM-fine-tuning retrieval-augmented-generation model-evaluation natural-language-processing
No package published · No dependents
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25

How are scores calculated?

Stars: 767
Forks: 61
Language: Python
License: Apache-2.0
Last pushed: Dec 16, 2025
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/rag/IntelLabs/RAG-FiT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
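The same endpoint can be called from a script. A minimal Python sketch follows; the URL pattern is taken from the curl example above, but the structure of the JSON response is an assumption (only standard-library calls are used, no API key required at the free tier).

```python
# Sketch: querying the quality-score API shown above.
# The endpoint path is from the curl example; response fields are assumptions.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Matches the curl example on this page:
print(quality_url("rag", "IntelLabs", "RAG-FiT"))
```

Calling `fetch_quality("rag", "IntelLabs", "RAG-FiT")` performs the live request; the URL builder alone is enough to verify the endpoint path offline.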