chonkie and chonkify
These are **competitors**: both prepare documents for RAG pipelines, but with different approaches to information retention — Chonkie is a mature, production-ready ingestion library built around chunking, while Chonkify focuses on extractive compression, keeping only the highest-value segments within a token budget.
About chonkie
chonkie-inc/chonkie
🦛 CHONK docs with Chonkie ✨ — The lightweight ingestion library for fast, efficient and robust RAG pipelines
This is a lightweight tool for developers building Retrieval-Augmented Generation (RAG) applications. It takes text data in various formats, intelligently splits it into smaller, meaningful parts (chunks), then refines and embeds those chunks. The output is optimized text chunks ready to be stored in a vector database for efficient retrieval by large language models.
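The split-then-embed flow can be sketched generically. The sliding token-window chunker below is a simplified stand-in to illustrate the idea (whitespace tokens instead of a real tokenizer) — it is not Chonkie's actual API, and the parameter names are illustrative assumptions.

```python
def chunk_text(text, max_tokens=64, overlap=8):
    """Split text into overlapping token-window chunks.

    Whitespace tokens stand in for a real tokenizer; the overlap keeps
    context that straddles a chunk boundary retrievable from either side.
    """
    tokens = text.split()
    step = max_tokens - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + max_tokens]
        chunks.append(" ".join(window))
        if start + max_tokens >= len(tokens):
            break  # last window already covers the tail of the document
    return chunks

# 200 distinct pseudo-words so the overlap is visible in the output.
doc = " ".join(f"w{i}" for i in range(200))
chunks = chunk_text(doc, max_tokens=64, overlap=8)
```

In a real pipeline, each returned chunk would then be passed to an embedding model and the vectors stored alongside the chunk text in a vector database.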
About chonkify
thom-heinrich/chonkify
Extractive document compression for RAG and agent pipelines. +69% vs LLMLingua, +175% vs LLMLingua2 on information recovery. Compiled wheels, try it out.
Builds document units, scores them with 768-dimensional embeddings, and selects the highest-ranked segments to stay within a token budget while maximizing factual recovery — critical for quantitative research and reasoning traces, where exact facts outweigh fluent paraphrasing. Supports multiple embedding backends, including Azure OpenAI, OpenAI-compatible APIs, and fully offline local SentenceTransformers, with a CLI and Python API for RAG pipelines and agent memory systems. Ships as compiled extension modules for performance-sensitive workloads on Linux, Windows, and macOS.
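The score-and-select step above can be sketched in miniature. This is a minimal illustration, not Chonkify's implementation: a bag-of-words counter stands in for the 768-dimensional embeddings, whitespace tokens stand in for real token counts, and all names are assumptions.

```python
import math
from collections import Counter

def embed(text):
    # Bag-of-words stand-in for a real 768-dimensional embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def compress(segments, query, token_budget):
    """Rank segments by similarity to the query, greedily keep the
    top-scored ones until the token budget is exhausted, then emit
    the survivors in original document order (extractive: segments
    are kept verbatim, never paraphrased)."""
    q = embed(query)
    ranked = sorted(enumerate(segments),
                    key=lambda kv: cosine(embed(kv[1]), q),
                    reverse=True)
    kept, used = [], 0
    for idx, seg in ranked:
        cost = len(seg.split())
        if used + cost <= token_budget:
            kept.append((idx, seg))
            used += cost
    return [seg for idx, seg in sorted(kept)]

segments = [
    "Revenue grew 12% year over year to $4.2B.",
    "The weather was pleasant during the call.",
    "Gross margin expanded to 61% in Q3.",
]
result = compress(segments, "revenue margin growth", token_budget=16)
```

Because selection is extractive, exact figures like "12%" and "$4.2B" survive verbatim — the property the information-recovery comparison against LLMLingua is measuring.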