abdullah85398/embedding-server

A high-performance, self-hosted, model-agnostic embedding service designed for LLM applications, RAG pipelines, and code intelligence tools. It serves as a drop-in replacement for OpenAI's embedding API while offering advanced features like native batching, smart chunking, and hardware acceleration.
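As an illustration of the OpenAI-compatible interface, a request to a self-hosted instance might look like the sketch below. The base URL, port, and model name are assumptions for illustration, not values documented by this project:

```python
import json
import urllib.request

# Assumed local endpoint; the actual host/port depend on your deployment.
BASE_URL = "http://localhost:8080/v1/embeddings"

# Build a request body in the OpenAI embeddings format:
# {"model": <model id>, "input": [list of strings]}
payload = {
    "model": "all-MiniLM-L6-v2",  # hypothetical model name
    "input": ["What is retrieval-augmented generation?"],
}

req = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Against a live instance the response mirrors OpenAI's shape:
# {"data": [{"embedding": [...], "index": 0}], "model": ..., "usage": {...}}
# response = urllib.request.urlopen(req)  # uncomment with a running server
```

Because the request format matches OpenAI's, existing client code can typically be pointed at the self-hosted server by swapping the base URL.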

Score: 30 / 100 (Emerging)

If you're building applications that use large language models (LLMs) or retrieval-augmented generation (RAG) pipelines, this tool converts text into numerical vectors (embeddings). You feed it text documents or queries, and it returns their vector representations. It is aimed at developers and MLOps engineers building AI-powered features such as semantic search, content recommendations, or question-answering systems.

Use this if you need a flexible, self-hosted service to generate text embeddings for your LLM applications, offering better control, performance, and cost efficiency than external APIs.

Not ideal if you're a casual user just looking for a simple API call for a one-off embedding task and don't want to manage a server.

LLM-development RAG-pipelines semantic-search MLOps AI-application-deployment
No Package · No Dependents
Maintenance: 13 / 25
Adoption: 4 / 25
Maturity: 13 / 25
Community: 0 / 25


Stars: 8
Forks:
Language: Python
License: MIT
Category: server
Last pushed: Mar 30, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/abdullah85398/embedding-server"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.