laelhalawani/gguf_modeldb

A quick, optimized solution for managing Llama-based GGUF quantized models: download GGUF files, retrieve message formatting, add more models from Hugging Face repos, and more. It is easy to use and comes prepacked with well-regarded, preconfigured open-source models: Dolphin Phi-2 2.7B, Mistral 7B v0.2, Mixtral 8x7B v0.1, SOLAR 10.7B, and Zephyr 3B.

Score: 45 / 100 (Emerging)

This tool helps developers working with Large Language Models (LLMs) to easily find, download, and manage GGUF-quantized Llama-based models. You input a desired model name and quantization level, and it provides the model file path and correct message formatting tags, ready for use with local inference engines like llama-cpp-python. This is designed for software developers building applications that use local LLMs.
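The package's own API is not shown on this page, so as an illustrative sketch only, the lookup it describes, mapping a model name and quantization level to a GGUF file path plus the model's message-formatting tags, might look like the following. All class, function, and tag names here are hypothetical stand-ins, not gguf_modeldb's actual interface, and the chat tags are only an approximation of Zephyr's template:

```python
from dataclasses import dataclass

# Hypothetical registry entry: maps (model, quantization) to a GGUF file path
# and the chat-template tags the model expects. NOT gguf_modeldb's real API.
@dataclass
class ModelEntry:
    name: str
    quantization: str
    gguf_path: str
    user_tags: tuple   # (prefix, suffix) wrapped around user messages
    ai_tags: tuple     # (prefix, suffix) wrapped around assistant replies

# Toy in-memory registry with a single example entry (path is illustrative).
REGISTRY = {
    ("zephyr-3b", "q4_k_m"): ModelEntry(
        name="zephyr-3b",
        quantization="q4_k_m",
        gguf_path="models/zephyr-3b.q4_k_m.gguf",
        user_tags=("<|user|>\n", "</s>\n"),
        ai_tags=("<|assistant|>\n", "</s>\n"),
    ),
}

def find_model(name: str, quantization: str) -> ModelEntry:
    """Look up a registered model by name and quantization level."""
    try:
        return REGISTRY[(name.lower(), quantization.lower())]
    except KeyError:
        raise ValueError(f"No GGUF model registered for {name} @ {quantization}")

def format_prompt(entry: ModelEntry, user_message: str) -> str:
    """Wrap a user message in the model's expected chat tags."""
    u_pre, u_suf = entry.user_tags
    a_pre, _ = entry.ai_tags
    return f"{u_pre}{user_message}{u_suf}{a_pre}"

entry = find_model("zephyr-3b", "Q4_K_M")
print(entry.gguf_path)
print(format_prompt(entry, "Hello!"))
```

The returned path and formatted prompt are exactly what a local inference engine such as llama-cpp-python needs: the path goes to the model loader, and the tagged prompt goes to the completion call.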

Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you are a developer who regularly works with GGUF-quantized LLMs and needs a streamlined way to manage and integrate them into your projects.

Not ideal if you are an end-user looking for a simple application to chat with LLMs without any programming.

Tags: Large Language Models, LLM development, model management, GGUF models, local AI inference
Stale (6 months)
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 14 / 25


Stars: 12
Forks: 3
Language: Python
License: (none listed)
Last pushed: Jan 13, 2024
Commits (30d): 0
Dependencies: 3
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/laelhalawani/gguf_modeldb"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.