robtacconelli/Nacrith-GPU
Nacrith — Lossless text compression via ensemble neural arithmetic coding. Combines the SmolLM2-135M language model with context mixing, adaptive prediction, and high-precision CDF coding. Compresses 3.1x better than gzip and outperforms CMIX, ts_zip, LLMZip, and FineZip. Fully lossless. GPU-accelerated.
Nacrith GPU dramatically shrinks the size of your text data without losing any information. It takes plain text, uses a small AI model to 'understand' the language, and outputs a much smaller compressed file. It suits anyone managing large volumes of text, such as researchers archiving documents or data professionals storing logs.
Use this if you need to store or transmit large text files and want the absolute best compression ratios, significantly outperforming standard tools like gzip or zip.
Not ideal if your primary concern is compression speed over file size, or if you don't have access to a GPU.
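The core idea behind this kind of tool is arithmetic coding driven by a probability model: each symbol narrows an interval in proportion to its probability, so likely symbols cost few bits and the output approaches the model's cross-entropy. The sketch below is a minimal, illustrative arithmetic coder using exact rationals and a static character-frequency model; the function names (`build_cdf`, `encode`, `decode`) are hypothetical and not Nacrith's API, which instead feeds per-token CDFs from SmolLM2-135M into a high-precision integer coder.

```python
# Minimal arithmetic-coding sketch: a static character model stands in for
# the neural LM's per-token CDF. All names here are illustrative, not
# Nacrith's actual API.
import math
from collections import Counter
from fractions import Fraction

def build_cdf(text):
    """Map each symbol to its [low, high) slice of the unit interval."""
    freq = Counter(text)
    total = sum(freq.values())
    cdf, lo = {}, 0
    for s in sorted(freq):
        cdf[s] = (Fraction(lo, total), Fraction(lo + freq[s], total))
        lo += freq[s]
    return cdf

def encode(text, cdf):
    """Narrow [low, high) by each symbol's CDF slice, then pick a short
    binary fraction n / 2**k inside the final interval."""
    low, high = Fraction(0), Fraction(1)
    for s in text:
        c_lo, c_hi = cdf[s]
        span = high - low
        low, high = low + span * c_lo, low + span * c_hi
    k = 1
    while Fraction(1, 2**k) >= high - low:  # ensure a multiple of 2^-k fits
        k += 1
    n = math.ceil(low * 2**k)
    return n, k  # codeword is n / 2**k, transmitted in k bits

def decode(n, k, cdf, length):
    """Replay the interval narrowing: locate the codeword's relative
    position in the current interval to recover each symbol."""
    val = Fraction(n, 2**k)
    low, high = Fraction(0), Fraction(1)
    out = []
    for _ in range(length):
        pos = (val - low) / (high - low)
        for s, (c_lo, c_hi) in cdf.items():
            if c_lo <= pos < c_hi:
                out.append(s)
                span = high - low
                low, high = low + span * c_lo, low + span * c_hi
                break
    return "".join(out)

text = "abracadabra"
cdf = build_cdf(text)
n, k = encode(text, cdf)
assert decode(n, k, cdf, len(text)) == text  # fully lossless round trip
```

A real coder renormalizes with fixed-precision integers instead of exact rationals, and the better the model's next-symbol predictions, the shorter `k` gets — which is exactly why swapping the static frequency table for a language model's CDFs yields the compression ratios claimed above.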
Stars
17
Forks
—
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 21, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/robtacconelli/Nacrith-GPU"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Tencent/AngelSlim
Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency.
nebuly-ai/optimate
A collection of libraries to optimise AI model performances
antgroup/glake
GLake: optimizing GPU memory management and IO transmission.
kyo-takano/chinchilla
A toolkit for scaling law research ⚖
liyucheng09/Selective_Context
Compress your input to ChatGPT or other LLMs, to let them process 2x more content and save 40%...