efficientscaling/Z1
[EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code"
This project improves how Large Language Models (LLMs) reason through complex problems that require a series of logical steps. Given a problem statement (prompt), it strengthens the model's ability to 'think' through the task by generating intermediate reasoning steps or code, yielding a more accurate and robust final answer. It is primarily aimed at AI researchers and practitioners building advanced LLM applications.
No commits in the last 6 months.
Use this if you are working with Large Language Models and need to improve their accuracy and reasoning capabilities, especially for tasks requiring multi-step thought processes or code generation.
Not ideal if you are a general user looking for an off-the-shelf application or if your primary need is for simple, direct text generation without complex reasoning.
Stars: 68
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Apr 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/efficientscaling/Z1"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
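The curl command above can also be scripted. Below is a minimal Python sketch that builds the same endpoint URL; the path layout (`quality/<ecosystem>/<owner>/<repo>`) is inferred from the curl example, and the response format is assumed to be JSON, which is not confirmed by this page.

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository.

    The path layout is inferred from the curl example; treat it as
    an assumption rather than documented API behavior.
    """
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "efficientscaling", "Z1")

# Uncomment to fetch live data (JSON response is an assumption):
# data = json.load(urlopen(url))
```

Without a key this stays within the 100-requests/day limit; for higher volume, pass a free API key as the service documents.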
Higher-rated alternatives
jncraton/languagemodels
Explore large language models in 512MB of RAM
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase