prajwal10001/semantic-chunker-langchain
Token-aware, LangChain-compatible semantic chunker with PDF, markdown, and layout support
This tool splits large documents such as PDFs and markdown files into smaller, semantically meaningful pieces. It takes a long document and emits 'chunks' of text that fit within an AI model's token limit while preserving surrounding context. It's aimed at anyone building AI applications that need to process long texts efficiently.
No commits in the last 6 months.
Use this if you need to feed long documents into an AI model but are hitting token limits, and you want the model to retain the full context.
Not ideal if you're only working with very short texts or if you don't need to integrate with AI models.
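To make the idea concrete, here is a minimal sketch of token-aware chunking: split text on paragraph boundaries, then greedily pack paragraphs into chunks that stay under a token budget. This is purely illustrative and not this library's actual API; the function names are hypothetical, and the whitespace word count stands in for a real tokenizer.

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: count whitespace-separated words.
    return len(text.split())

def chunk_paragraphs(paragraphs: list[str], max_tokens: int) -> list[str]:
    # Greedily pack whole paragraphs into chunks without exceeding max_tokens.
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for para in paragraphs:
        n = count_tokens(para)
        if current and current_tokens + n > max_tokens:
            # Budget exceeded: close the current chunk and start a new one.
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks

paras = ["one two three", "four five", "six seven eight nine", "ten"]
print(chunk_paragraphs(paras, max_tokens=5))
# → ['one two three\n\nfour five', 'six seven eight nine\n\nten']
```

Keeping paragraphs intact (rather than cutting at a fixed character count) is the core idea behind semantic chunking: boundaries follow the document's own structure, so each chunk remains coherent on its own.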
Stars
13
Forks
—
Language
Python
License
MIT
Category
Last pushed
Jun 28, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/prajwal10001/semantic-chunker-langchain"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mirth/chonky
Fully neural approach for text chunking
sentencizer/sentencizer
A sentence splitting (sentence boundary disambiguation) library for Go. It is rule-based and...
jackfsuia/bert-chunker
bert-chunker: efficient and trained chunking for unstructured documents (trains BERT for document segmentation).
bgokden/fast-text-splitter
fast text splitter with onnx