seehiong/micronaut-llama3
A high-performance Llama3 implementation using Micronaut and GraalVM Native Image
This project lets software developers integrate the Llama3 large language model into their applications: it accepts user prompts and returns text completions or chat responses. It is aimed at developers who need to embed Llama3's text generation or conversational AI features in a high-performance, resource-efficient backend service.
No commits in the last 6 months.
Use this if you are a backend developer building an application or service that needs Llama3's text generation or chat functionality and you prioritize high performance and efficient resource usage.
Not ideal if you are an end-user looking for a ready-to-use application, or a developer who doesn't use Java, Micronaut, or GraalVM.
Stars: 31
Forks: 2
Language: Java
License: —
Category:
Last pushed: Feb 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/seehiong/micronaut-llama3"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.