aws-samples/sample-ollama-server

Ollama on GPU EC2 instance with Open WebUI web interface and Bedrock access

Quality score: 39 / 100 (Emerging)

This project helps individuals and teams set up their own private environment for experimenting with and running large language models (LLMs). It provides a web interface where you can interact with various LLMs, including popular open-source models and Amazon Bedrock models. This is ideal for researchers, developers, or data scientists who need a secure, powerful platform to explore generative AI applications.

Use this if you need a dedicated, high-performance environment to run and interact with large language models, including both open-source and proprietary options like Amazon Bedrock, without managing complex infrastructure yourself.

Not ideal if you prefer using publicly available LLM services directly or if your primary need is basic text generation that doesn't require a dedicated GPU server.

generative-ai machine-learning-experimentation natural-language-processing AI-research cloud-computing
No package · No dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 15 / 25
Community 7 / 25


Stars: 25
Forks: 2
Language: (not listed)
License: MIT-0
Last pushed: Mar 01, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/aws-samples/sample-ollama-server"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
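For use beyond a one-off shell call, the same endpoint can be queried programmatically. The sketch below builds the URL from the pattern in the curl example above; the host and path are taken from that example, but the JSON response schema is an assumption and may differ from what the service actually returns.

```python
# Minimal sketch of querying the quality API. The URL pattern is taken
# from the curl example above; the response schema is NOT documented
# here, so callers should inspect the returned JSON themselves.
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a given repository."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the endpoint for this project without making a request.
    print(quality_url("llm-tools", "aws-samples", "sample-ollama-server"))
```

Anonymous access is rate-limited to 100 requests/day, so batch consumers would want to cache responses or register for a key.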