ollama-benchmark and llm-optimizer-benchmark

ollama-benchmark: Established, total score 53 (of 100)
    Maintenance 10/25 · Adoption 10/25 · Maturity 16/25 · Community 17/25
    Stars: 345 · Forks: 41 · Downloads: · Commits (30d): 0 · Language: Python · License: MIT
    No package published · No dependents

llm-optimizer-benchmark: total score 38 (sum of subscores)
    Maintenance 6/25 · Adoption 8/25 · Maturity 15/25 · Community 9/25
    Stars: 56 · Forks: 4 · Downloads: · Commits (30d): 0 · Language: Python · License: Apache-2.0
    No package published · No dependents

About ollama-benchmark

aidatatools/ollama-benchmark

LLM Benchmark for Throughput via Ollama (Local LLMs)

This tool helps you quickly understand the real performance of your local Large Language Models (LLMs) running via Ollama. It takes your existing local LLM setup and provides a clear tokens-per-second metric. AI/ML practitioners, researchers, or anyone experimenting with local LLMs can use this to assess different models and hardware configurations.

Tags: local-LLMs, machine-learning-operations, AI-performance-tuning, model-evaluation, LLM-deployment
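The tokens-per-second figure such a tool reports can be derived directly from Ollama's HTTP API: a non-streaming `/api/generate` response includes `eval_count` (tokens generated) and `eval_duration` (nanoseconds). A minimal sketch of that measurement follows; the model name, prompt, and server URL are illustrative defaults, not this project's actual configuration:

```python
import json
import urllib.request


def tokens_per_sec(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's token count and nanosecond duration to tokens/s."""
    return eval_count / eval_duration_ns * 1e9


def benchmark(model: str = "llama3",
              prompt: str = "Why is the sky blue?",
              url: str = "http://localhost:11434/api/generate") -> float:
    """Run one non-streaming generation and return measured throughput."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return tokens_per_sec(body["eval_count"], body["eval_duration"])


if __name__ == "__main__":
    # Requires a local Ollama server with the model already pulled.
    print(f"{benchmark():.1f} tokens/s")
```

Running the same measurement across several models or machines gives the comparable throughput numbers the tool is built around.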

About llm-optimizer-benchmark

epfml/llm-optimizer-benchmark

Benchmarking Optimizers for LLM Pretraining

This project offers a standardized way to compare different optimization techniques used in training Large Language Models (LLMs). It takes various optimizer configurations, model sizes, and training durations as input and produces benchmark results showing which optimizer performs best under specific conditions. LLM researchers and practitioners would use this to inform their choice of optimization methods for pretraining LLMs.

Tags: LLM pretraining, Deep Learning optimization, Model development, AI research, Language model engineering
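The core loop of such a benchmark (run each optimizer configuration on the same task for a fixed step budget, then compare final loss) can be sketched in plain Python on a toy quadratic. The optimizers and task here are illustrative stand-ins, not the project's actual setup:

```python
def run_optimizer(update, steps: int = 100, w0: float = 0.0) -> float:
    """Minimize the toy loss f(w) = (w - 3)^2 with a given update rule.

    Returns the final loss after a fixed step budget, so different
    update rules can be compared under identical conditions.
    """
    w, state = w0, {}
    for _ in range(steps):
        grad = 2 * (w - 3)  # analytic gradient of the toy loss
        w = update(w, grad, state)
    return (w - 3) ** 2


def sgd(w, grad, state, lr=0.1):
    """Plain gradient-descent step."""
    return w - lr * grad


def sgd_momentum(w, grad, state, lr=0.1, beta=0.9):
    """Gradient descent with heavy-ball momentum, kept in `state`."""
    state["v"] = beta * state.get("v", 0.0) + grad
    return w - lr * state["v"]


if __name__ == "__main__":
    results = {name: run_optimizer(fn)
               for name, fn in [("sgd", sgd), ("sgd+momentum", sgd_momentum)]}
    for name, loss in sorted(results.items(), key=lambda kv: kv[1]):
        print(f"{name:14s} final loss = {loss:.3e}")
```

A real LLM-pretraining benchmark replaces the quadratic with a transformer training run and sweeps model size and training duration, but the comparison logic is the same: identical task, identical budget, one row of results per optimizer configuration.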

Scores updated daily from GitHub, PyPI, and npm data.