ZO-Bench/ZO-LLM
[ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark".
This project helps machine learning researchers fine-tune large language models (LLMs) more efficiently, especially when memory resources are limited. It takes a pre-trained LLM and a specific task dataset, then applies various zeroth-order optimization methods to produce a fine-tuned LLM that performs well on that task with reduced memory consumption. Researchers focused on LLM performance and resource optimization would use this.
124 stars. No commits in the last 6 months.
Use this if you are a researcher exploring or implementing memory-efficient ways to fine-tune large language models for tasks like classification, question-answering, or commonsense reasoning.
Not ideal if you are a practitioner looking for a ready-to-use, high-level tool for LLM fine-tuning without delving into optimization algorithms.
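The zeroth-order methods benchmarked here share one core idea: gradients are estimated from forward passes alone, by probing the loss along random perturbation directions instead of backpropagating. A minimal sketch of a two-point (SPSA-style) estimator driving gradient descent on a toy quadratic (all names and hyperparameters are illustrative, not taken from the repo):

```python
import random

def zo_gradient_estimate(loss_fn, params, eps=1e-3):
    """Two-point zeroth-order gradient estimate.

    Samples one shared Gaussian direction z, evaluates the loss at
    params + eps*z and params - eps*z, and uses the finite difference
    as the directional derivative along z. Only two forward passes,
    no backward pass.
    """
    z = [random.gauss(0.0, 1.0) for _ in params]
    plus = [p + eps * zi for p, zi in zip(params, z)]
    minus = [p - eps * zi for p, zi in zip(params, z)]
    proj = (loss_fn(plus) - loss_fn(minus)) / (2 * eps)
    return [proj * zi for zi in z]

def zo_sgd(loss_fn, params, lr=0.05, steps=200):
    """Plain SGD, but with the zeroth-order estimate in place of a true gradient."""
    for _ in range(steps):
        grad = zo_gradient_estimate(loss_fn, params)
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

# Toy objective: minimize ||x - target||^2 (stands in for a fine-tuning loss).
target = [1.0, -2.0, 0.5]
loss = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))

random.seed(0)
final = zo_sgd(loss, [0.0, 0.0, 0.0])
```

This is why the memory footprint is small: only the perturbed parameters and two scalar losses are needed per step, with no activation or gradient storage.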
Stars: 124
Forks: 15
Language: Python
License: GPL-3.0
Category:
Last pushed: Jul 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ZO-Bench/ZO-LLM"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
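The same endpoint can be queried from Python using only the standard library. A minimal sketch (the function name is ours, and the JSON response shape is an assumption; adjust parsing to the actual payload):

```python
import json
import urllib.request

# Endpoint shown in the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/ZO-Bench/ZO-LLM"

def fetch_repo_quality(url: str = URL) -> dict:
    """GET the quality endpoint and parse the body as JSON.

    Assumes the API returns a JSON object; no API key is attached,
    so this stays within the 100 requests/day anonymous limit.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# data = fetch_repo_quality()  # uncomment to hit the live endpoint
```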
Higher-rated alternatives
scaleapi/llm-engine
Scale LLM Engine public repository
AGI-Arena/MARS
The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models
modelscope/easydistill
A toolkit for knowledge distillation of large language models
AGI-Edgerunners/LLM-Adapters
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient...
Wang-ML-Lab/bayesian-peft
Bayesian Low-Rank Adaptation of LLMs: BLoB [NeurIPS 2024] and TFB [NeurIPS 2025]