speedyk-005/chunklet-py
One library to split them all: Sentence, Code, Docs. Chunk smarter, not harder — built for LLMs, RAG pipelines, and beyond.
This tool helps AI engineers and researchers prepare text, documents, and code for use in large language models (LLMs) and retrieval-augmented generation (RAG) systems. It accepts raw text, PDFs, Word documents, code files, and more, then splits each input into smaller, meaningful, context-rich pieces. The resulting chunks preserve meaning and structure and carry metadata that improves downstream AI performance.
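To illustrate the idea of context-aware chunking (this is a generic sketch, not chunklet-py's actual API — the function name, parameters, and regex are all illustrative assumptions): split on sentence boundaries rather than raw character offsets, and carry a small sentence overlap between chunks so each one keeps local context.

```python
import re

def sentence_chunks(text: str, max_chars: int = 200, overlap: int = 1) -> list[str]:
    # Hypothetical helper, not part of chunklet-py.
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], []
    for sent in sentences:
        if current and sum(len(s) for s in current) + len(sent) > max_chars:
            chunks.append(" ".join(current))
            # Carry the last `overlap` sentences into the next chunk for context.
            current = current[-overlap:] if overlap else []
        current.append(sent)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Note that chunk boundaries always fall on sentence ends, unlike naive fixed-size splitting, which can cut a sentence (or a word) in half.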
Used by 1 other package. Available on PyPI.
Use this if you need to break large volumes of text, documents, or code into smaller, context-aware segments for AI applications such as LLMs and RAG.
Not ideal if you only need basic splitting by character count or arbitrary line breaks, or if your audience is not AI/ML developers or researchers.
Stars
64
Forks
2
Language
Python
License
MIT
Category
Last pushed
Mar 13, 2026
Commits (30d)
0
Dependencies
12
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/speedyk-005/chunklet-py"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
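The same endpoint shown in the curl example can be called from Python with only the standard library. The URL pattern (owner and repo appended to the quality path) is taken directly from the curl command above; the response schema is not documented here, so the JSON parsing is an assumption.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    # Mirrors the curl example: BASE/<owner>/<repo>
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Anonymous access: 100 requests/day; a free API key raises that to 1,000/day.
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)  # assumes a JSON body; schema not documented here

if __name__ == "__main__":
    print(quality_url("speedyk-005", "chunklet-py"))
```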
Higher-rated alternatives
chonkie-inc/chonkie
🦛 CHONK docs with Chonkie ✨ — The lightweight ingestion library for fast, efficient and robust...
jchunk-io/jchunk
JChunk is a lightweight and flexible library designed to provide multiple strategies for text...
andreshere00/Splitter_MR
Chunk your data into markdown text blocks for your LLM applications
chonkie-inc/chonkiejs
🦛 CHONK your texts with Chonkie ✨ Type-friendly, light-weight, fast and super-simple chunking library
thom-heinrich/chonkify
Extractive document compression for RAG and agent pipelines. +69% vs LLMLingua, +175% vs...