YerbaPage/LongCodeZip
LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
This tool helps developers and AI engineers efficiently manage very long codebases when working with code language models. It takes a large code input and a specific query (like a bug report or feature request) and outputs a much shorter, but still highly relevant, code snippet. This ensures the language model focuses on the most critical parts of the code without getting overwhelmed, making it faster and more accurate for tasks like code completion, bug fixing, or refactoring.
Use this if you need to feed large code files into a code language model but want to reduce the input size without losing important context related to a specific task or query.
Not ideal if you primarily work with short, isolated code snippets or if your code language model can already handle extremely long contexts efficiently without performance degradation.
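The description above boils down to query-aware context selection under a size budget. The toy sketch below illustrates that general idea with a simple token-overlap scorer; it is an illustration only, not LongCodeZip's actual algorithm (the tool ranks chunks with a code language model, not lexical overlap):

```python
# Toy sketch of query-aware code-context compression: split code into
# coarse chunks, score each chunk's relevance to the query, and keep
# the best chunks within a size budget. NOT LongCodeZip's real method.
import re

def compress(code: str, query: str, budget_chars: int) -> str:
    """Keep the code chunks most relevant to `query`, within a budget."""
    # Split on blank lines into coarse chunks (functions, classes, blocks).
    chunks = [c for c in code.split("\n\n") if c.strip()]
    q_tokens = set(re.findall(r"\w+", query.lower()))

    def score(chunk: str) -> int:
        # Relevance here = word overlap between the chunk and the query.
        return len(q_tokens & set(re.findall(r"\w+", chunk.lower())))

    # Greedily keep the highest-scoring chunks until the budget is spent.
    kept, used = [], 0
    for chunk in sorted(chunks, key=score, reverse=True):
        if used + len(chunk) <= budget_chars:
            kept.append(chunk)
            used += len(chunk)

    # Emit kept chunks in their original order to preserve readability.
    kept.sort(key=chunks.index)
    return "\n\n".join(kept)
```

For example, `compress(big_file_text, "fix the pagination bug", 4000)` would return only the blocks of `big_file_text` that mention pagination-related terms, up to roughly 4,000 characters.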
Stars: 142
Forks: 25
Language: Python
License: MIT
Category:
Last pushed: Feb 05, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/YerbaPage/LongCodeZip"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
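The same endpoint shown in the curl command can be called from Python with only the standard library. The JSON field names in the response are not documented here, so this sketch treats the payload as an opaque dict:

```python
# Minimal sketch of fetching the repo quality data from the endpoint
# above. The response schema is an assumption (undocumented here),
# so the result is returned as a plain dict.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/YerbaPage/LongCodeZip"

def fetch_repo_quality(url: str = URL) -> dict:
    """Fetch the repo's quality data; raises on HTTP errors."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Example (performs a live HTTP request, counted against the daily limit):
#   data = fetch_repo_quality()
#   print(json.dumps(data, indent=2))
```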
Related models
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
p-e-w/heretic
Fully automatic censorship removal for language models
Orion-zhen/abliteration
Make abliterated models with transformers, easy and fast
locuslab/wanda
A simple and effective LLM pruning approach.
tommasomncttn/mergenetic
Flexible library for merging large language models (LLMs) via evolutionary optimization (ACL 2025 Demo).