aishwaryaprabhat/BigBertha
BigBertha is an architecture design that demonstrates how automated LLMOps (Large Language Model Operations) can be achieved on any Kubernetes cluster using open-source, container-native technologies 🌟
This project provides an architectural blueprint for an automated system to manage and maintain Large Language Models (LLMs) in production. It ingests unstructured data, monitors LLM performance, emits alerts, and automatically triggers retraining pipelines. It is aimed at MLOps engineers, DevOps engineers, and platform teams responsible for deploying and managing AI applications.
No commits in the last 6 months.
Use this if you need a repeatable, automated way to keep your Large Language Models performing well by monitoring them, automatically retraining them when performance drops, and updating their knowledge base with new information.
Not ideal if you are looking for a pre-trained LLM or a simple API to integrate an LLM into an application, as this focuses on infrastructure and operations.
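The monitor-alert-retrain loop described above can be sketched in a few lines. This is an illustrative sketch only, not BigBertha's actual code: the metric, the threshold, and the `trigger_retraining_pipeline` function are all hypothetical stand-ins for what would, in a real cluster, be Kubernetes-native monitoring and workflow components.

```python
# Illustrative sketch of the monitor -> alert -> retrain loop described above.
# All names and the 0.7 threshold are assumptions, not BigBertha's actual API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMMetrics:
    avg_user_rating: float  # e.g. thumbs-up ratio in [0.0, 1.0] (hypothetical metric)

RETRAIN_THRESHOLD = 0.7  # assumed performance floor

def trigger_retraining_pipeline(reason: str) -> str:
    # Placeholder: a real deployment would submit a workflow to the cluster here.
    return f"retraining triggered: {reason}"

def monitor(metrics: LLMMetrics) -> Optional[str]:
    # Fire an alert and kick off retraining when performance drops below the floor.
    if metrics.avg_user_rating < RETRAIN_THRESHOLD:
        return trigger_retraining_pipeline(
            f"rating {metrics.avg_user_rating:.2f} below {RETRAIN_THRESHOLD}"
        )
    return None
```

In the actual architecture this logic lives in cluster-level tooling rather than application code; the sketch only shows the control flow the blueprint automates.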
Stars: 28
Forks: 7
Language: Python
License: Apache-2.0
Category:
Last pushed: Oct 27, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/aishwaryaprabhat/BigBertha"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
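The same endpoint can be called from Python. A minimal sketch, assuming only the URL shown in the curl example above; the response schema is not documented here, so the fetch is left commented out and the snippet just builds the per-repo URL:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-repo endpoint used in the curl example above.
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("mlops", "aishwaryaprabhat", "BigBertha")
print(url)
# Uncomment to fetch live data (subject to the 100 requests/day limit):
# with urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```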
Higher-rated alternatives
kserve/kserve
Standardized Distributed Generative and Predictive AI Inference Platform for Scalable,...
omegaml/omegaml
MLOps simplified. One-stop AI delivery platform, all the features you need.
awslabs/aiops-modules
AIOps modules is a collection of reusable Infrastructure as Code (IaC) modules for Machine...
GoogleCloudDataproc/dataproc-ml-python
Library to simplify running distributed ML workloads with Apache Spark
jina-ai/serve
☁️ Build multimodal AI applications with cloud-native stack