nebuly-ai/optimate

A collection of libraries to optimise AI model performance

Score: 45 / 100 (Emerging)

This suite of tools helps AI/ML engineers make their models run faster and more efficiently. You provide your AI models and hardware setup, and it helps you get optimized models that use fewer resources and incur lower inference costs. It's for machine learning engineers, MLOps specialists, and data scientists looking to improve the operational performance of their AI systems.

8,349 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or MLOps specialist looking to make your AI models, especially large language models (LLMs), run more cost-effectively on GPUs or CPUs.

Not ideal if you need active support, ongoing updates, or a beginner-friendly solution for general AI model development, as this project is no longer maintained.

Tags: AI-model-optimization · MLOps · GPU-utilization · LLM-deployment · inference-cost-reduction
Flags: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 8,349
Forks: 624
Language: Python
License: Apache-2.0
Last pushed: Jul 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/nebuly-ai/optimate"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
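For programmatic access, the curl call above can be reproduced with the Python standard library. This is a minimal sketch: the endpoint URL is the one shown above, but the structure of the JSON response (field names and types) is not documented here, so the code only fetches and decodes it without assuming a schema.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    No API key is required for up to 100 requests/day; the response
    schema is an assumption and should be inspected before use.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the URL used by the curl example above.
    print(quality_url("llm-tools", "nebuly-ai", "optimate"))
```

A rate-limited caller would add the free API key (for 1,000 requests/day) as whatever header or query parameter the service documents; that detail is not shown here.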