makit/makit-llm-lambda

Example showing how to run an LLM fully inside an AWS Lambda function

Score: 36 / 100 (Emerging)

This project shows how to integrate and run a Large Language Model (LLM) such as GPT4All directly within an AWS Lambda serverless function. You supply a pre-trained LLM model file, send a text prompt via an HTTP call, and receive a generated text response. It is aimed at backend developers and cloud architects who want to deploy custom LLMs without managing dedicated servers.

No commits in the last 6 months.

Use this if you are a backend developer or cloud architect who needs to host a custom, open-source LLM for basic text generation in a cost-effective, serverless environment.

Not ideal if you need a high-performance, low-latency LLM for complex, real-time applications or if your model is too large for Lambda's memory constraints.

serverless-architecture cloud-deployment backend-development AI-model-hosting cost-optimization
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 23
Forks: 4
Language: Dockerfile
License: MIT
Last pushed: Jan 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/makit/makit-llm-lambda"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
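If you want to query the endpoint from code rather than curl, the URL appears to follow the pattern `/api/v1/quality/{category}/{owner}/{repo}`, inferred from the curl example above (this pattern is an assumption; only the `llm-tools/makit/makit-llm-lambda` path is confirmed). A minimal sketch:

```python
import json
import urllib.request

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    The {category}/{owner}/{repo} path structure is inferred from the
    single documented example, not from an official API reference.
    """
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (response shape unknown)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("llm-tools", "makit", "makit-llm-lambda")
print(url)
```

The field names in the JSON response are not documented here, so inspect the decoded dictionary before relying on specific keys.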