ssisOneTeam/Korean-Embedding-Model-Performance-Benchmark-for-Retriever
Korean Sentence Embedding Model Performance Benchmark for RAG
This project helps improve the accuracy of RAG (Retrieval-Augmented Generation) systems designed for Korean public welfare services. It takes Korean welfare policy documents, generates specialized question-and-answer datasets from them, and then uses those datasets to evaluate and fine-tune various Korean embedding models. The output is a benchmark showing which Korean embedding models perform best at retrieving relevant information within the welfare domain. It is aimed at RAG system developers, AI researchers, and data scientists working on Korean natural language processing applications, specifically in the public service sector.
No commits in the last 6 months.
Use this if you are building a Korean RAG system for public welfare information and need to identify the best performing embedding models to ensure accurate retrieval of answers.
Not ideal if your RAG system does not focus on the Korean language or public welfare, or if you are not interested in benchmarking the performance of different embedding models.
Stars: 50
Forks: 3
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Jan 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/ssisOneTeam/Korean-Embedding-Model-Performance-Benchmark-for-Retriever"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
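For scripted access, the curl command above can be wrapped in a small Python helper. This is a minimal sketch using only the standard library; the endpoint path is taken from the curl example, but the structure of the JSON response is not documented here, so treat it as an assumption and inspect the payload before relying on specific fields.

```python
# Minimal client for the pt-edge quality API shown in the curl example.
# The URL format is from the example above; the response schema is an
# assumption (the API presumably returns JSON repo-quality metadata).
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url(
        "ssisOneTeam",
        "Korean-Embedding-Model-Performance-Benchmark-for-Retriever",
    ))
```

No key is required at the default rate limit, so the request needs no authentication header.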
Higher-rated alternatives
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems