LLukas22/llm-rs-python
Unofficial Python bindings for the Rust llm library. 🐍❤️🦀
This project helps developers integrate large language models (LLMs) such as LLaMA and GPT-NeoX into their Python applications. It loads a variety of LLM model formats and supports local text generation, streaming, and model conversion directly on the CPU or GPU, making it easier to build AI features. It is aimed at developers working on machine learning projects who need to deploy and run LLMs efficiently.
No commits in the last 6 months. Available on PyPI.
Use this if you are a Python developer who needs to run a variety of large language models locally on your hardware, with optional GPU acceleration, and want to integrate them into frameworks like LangChain or Haystack.
Not ideal if you are an end-user without programming experience, or if you prefer cloud-based LLM services over local deployment and management.
Stars
76
Forks
4
Language
Python
License
MIT
Category
Last pushed
Aug 19, 2023
Commits (30d)
0
Dependencies
2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/LLukas22/llm-rs-python"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
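For use from Python rather than the shell, the curl example above can be sketched with the standard library alone. Only the URL from the curl command is taken from this page; the helper names below and the structure of the JSON response are assumptions, as the response fields are not documented here.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint used in the curl example."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (field names not specified here)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# The same repository the curl example targets:
print(quality_url("LLukas22", "llm-rs-python"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/LLukas22/llm-rs-python
```

Note that the unauthenticated limit of 100 requests/day applies per the text above; with a free key the same calls are allowed 1,000/day.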
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (γ«γ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.