SafellmHub/hguard-go
Guardrails for LLMs: detect and block hallucinated tool calls to improve safety and reliability.
hguard-go is for developers building applications powered by large language models (LLMs). It validates the model's tool calls against developer-defined rules so the LLM interacts safely and reliably with external tools: you declare which tools the model may use, under what conditions, and with which parameters, giving your AI integration a safety net.
No commits in the last 6 months.
Use this if you are a developer building an LLM-powered application and need to prevent the LLM from making inappropriate, incorrect, or 'hallucinated' calls to external tools or APIs.
Not ideal if you are an end-user looking for a no-code solution to manage LLM behavior, or if you are not developing with Go.
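To make the guardrail pattern concrete, here is a minimal sketch in Go: register the tools the model may call, then validate every proposed call against that policy before executing it. The types and names below (ToolPolicy, ToolCall, Validate) are hypothetical illustrations of the general technique, not hguard-go's actual API.

package main

import (
	"errors"
	"fmt"
)

// ToolPolicy maps each registered tool name to the set of parameter
// names it accepts. (Hypothetical type for illustration only.)
type ToolPolicy struct {
	AllowedParams map[string]map[string]bool
}

// ToolCall is a tool invocation proposed by the LLM.
type ToolCall struct {
	Name   string
	Params map[string]string
}

var errHallucinatedTool = errors.New("tool is not registered")

// Validate rejects calls to unregistered tools and calls that pass
// parameters the tool does not accept.
func (p ToolPolicy) Validate(c ToolCall) error {
	allowed, ok := p.AllowedParams[c.Name]
	if !ok {
		return fmt.Errorf("%q: %w", c.Name, errHallucinatedTool)
	}
	for param := range c.Params {
		if !allowed[param] {
			return fmt.Errorf("%q: unexpected parameter %q", c.Name, param)
		}
	}
	return nil
}

func main() {
	policy := ToolPolicy{AllowedParams: map[string]map[string]bool{
		"get_weather": {"city": true},
	}}

	// A hallucinated call: this tool was never registered, so it is blocked.
	hallucinated := ToolCall{Name: "delete_user", Params: map[string]string{"id": "42"}}
	if err := policy.Validate(hallucinated); err != nil {
		fmt.Println("blocked:", err)
	}

	// A valid call passes validation and may be executed.
	valid := ToolCall{Name: "get_weather", Params: map[string]string{"city": "Oslo"}}
	if err := policy.Validate(valid); err == nil {
		fmt.Println("allowed:", valid.Name)
	}
}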
Stars: 7
Forks: —
Language: Go
License: MIT
Category: —
Last pushed: Jul 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/SafellmHub/hguard-go"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
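For programmatic use, the same request can be made from Go with only the standard library; a minimal sketch follows. The endpoint is assumed to return JSON, but its schema isn't documented here, so the example just prints the raw body.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	url := "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/SafellmHub/hguard-go"

	resp, err := client.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		log.Fatalf("unexpected status: %s", resp.Status)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// Print the raw JSON response; parse it once you know the schema.
	fmt.Println(string(body))
}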
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
A security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...