jzhoubu/vsearch
An Extensible Framework for Retrieval-Augmented LLM Applications: Learning Relevance Beyond Simple Similarity.
This project helps anyone working with large collections of documents or multi-modal data (such as text and images) find the most relevant information efficiently. Given a search query and a dataset, it retrieves results by learning relevance beyond surface-level similarity rather than relying on keyword overlap alone. It is aimed at researchers, analysts, and content managers who need precise search capabilities.
No commits in the last 6 months.
Use this if you need to build advanced search systems that go beyond simple keyword matching, especially when dealing with large volumes of text or mixed data types.
Not ideal if you just need a basic search function for a small, static dataset or if you are looking for a simple keyword-based search engine.
Stars
41
Forks
1
Language
Python
License
MIT
Category
Last pushed
Dec 08, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/jzhoubu/vsearch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ewok-core/ewok-paper
Elements of World Knowledge! This repository houses data and code needed to replicate our first...
itrummer/thalamusdb
ThalamusDB: semantic query processing on multimodal data
texttron/hyde
HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels
ArslanKAS/Large-Language-Models-with-Semantic-Search
Explore from keyword search to dense retrieval and reranking, which injects the intelligence of...
Ahren09/SciEvo
A longitudinal dataset for academic literature, including papers, metadata, and citation graphs,...