ZO-Bench/ZO-LLM

[ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark".

Overall score: 42 / 100 (Emerging)

This project helps machine learning researchers fine-tune large language models (LLMs) when memory is limited. Given a pre-trained LLM and a task dataset, it applies a range of zeroth-order optimization methods, which estimate gradients from forward passes alone rather than via backpropagation, to produce a fine-tuned model that performs well on the task at a much lower memory cost. It is aimed at researchers studying LLM performance and resource-efficient training.
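
As a rough illustration of the idea (not the repository's actual code or API), the sketch below shows the two-point zeroth-order estimator that methods such as MeZO build on: the gradient is approximated from two forward passes along a random direction, so no backward pass or activation storage is needed. The toy quadratic loss and every name here are hypothetical.

import numpy as np

def zo_step(theta, loss_fn, lr=1e-2, eps=1e-3, rng=None):
    # One two-point zeroth-order (SPSA-style) update: two forward
    # passes, no backpropagation, hence no gradient/activation memory.
    if rng is None:
        rng = np.random.default_rng()
    z = rng.standard_normal(theta.shape)        # random search direction
    loss_plus = loss_fn(theta + eps * z)        # forward pass 1
    loss_minus = loss_fn(theta - eps * z)       # forward pass 2
    g = (loss_plus - loss_minus) / (2.0 * eps)  # directional-derivative estimate
    return theta - lr * g * z                   # SGD-style step along z

# Toy usage: a quadratic stands in for a model's fine-tuning loss.
rng = np.random.default_rng(0)
theta = np.ones(4)
for _ in range(500):
    theta = zo_step(theta, lambda p: float(np.sum(p ** 2)), rng=rng)
print(theta)  # near the zero vector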

124 stars. No commits in the last 6 months.

Use this if you are a researcher exploring or implementing memory-efficient ways to fine-tune large language models for tasks like classification, question-answering, or commonsense reasoning.

Not ideal if you are a practitioner looking for a ready-to-use, high-level tool for LLM fine-tuning without delving into optimization algorithms.

Tags: LLM fine-tuning, memory optimization, natural language processing research, machine learning research, computational resource management
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 14 / 25

Stars: 124
Forks: 15
Language: Python
License: GPL-3.0
Last pushed: Jul 06, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ZO-Bench/ZO-LLM"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
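
If you prefer Python to curl, here is a minimal standard-library sketch of the same request; the response schema is not documented on this page, so the snippet assumes nothing beyond valid JSON and simply pretty-prints the reply.

import json
import urllib.request

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/ZO-Bench/ZO-LLM"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # parse the JSON body

print(json.dumps(data, indent=2))  # inspect the available fields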