citiususc/smarty-gpt

A wrapper for LLMs that biases their behaviour using prompts and contexts, transparently to end-users

Score: 54 / 100 (Established)

This tool helps developers who integrate Large Language Models (LLMs) into their applications manage and apply specific behaviours easily. It acts as a layer between your code and various LLMs (such as ChatGPT or GPT-4), letting you define input prompts and contexts that guide the model's responses. The output is a biased, pre-conditioned response from the LLM based on your configured prompts, which is useful for building applications that require consistent or domain-specific LLM interactions.
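The "layer between your code and the LLM" idea can be sketched as follows. This is a conceptual illustration only: smarty-gpt's actual API is not shown on this page, so `ContextWrapper` and its methods are hypothetical names, not the library's real interface.

```python
# Conceptual sketch of a prompt-biasing wrapper layer.
# NOTE: ContextWrapper is a hypothetical class for illustration;
# it is NOT smarty-gpt's actual API.

class ContextWrapper:
    """Prepend a fixed context to every prompt before it reaches the model."""

    def __init__(self, context: str, model_call):
        self.context = context        # the biasing context or persona
        self.model_call = model_call  # any function: prompt -> response

    def ask(self, prompt: str) -> str:
        # The context is applied transparently on every call,
        # so application code never repeats the prompt engineering.
        return self.model_call(f"{self.context}\n\n{prompt}")


# Usage with a stand-in for a real LLM call:
echo_model = lambda p: f"[model saw] {p}"
wrapper = ContextWrapper("You are a medical assistant.", echo_model)
reply = wrapper.ask("Define hypertension.")
```

Swapping `echo_model` for a real client call (e.g. an OpenAI request) would bias every response with the configured context, without changing the calling code.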

140 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are a developer looking for a streamlined way to apply consistent prompts and contexts to different LLMs within your applications, without needing to manage complex prompt engineering directly in your code.

Not ideal if you are an end-user without programming experience, as this is a library designed for software development and integration.

Tags: LLM integration, prompt engineering, application development, AI model management, API wrapper
Stale for 6 months.
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 17 / 25


Stars: 140
Forks: 21
Language: Jupyter Notebook
License: GPL-3.0
Last pushed: Jun 11, 2025
Commits (30d): 0
Dependencies: 9

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/citiususc/smarty-gpt"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
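The curl endpoint above can also be called from Python. A minimal sketch using only the standard library; the URL pattern comes from the example above, but the structure of the returned JSON is an assumption, since its fields are not documented here.

```python
# Sketch: fetching quality data from the API shown above.
# The endpoint path comes from the curl example; the response's
# JSON field names are not documented on this page.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality-report URL for a repository (e.g.
    category='prompt-engineering', repo='citiususc/smarty-gpt')."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch and decode the quality report.

    Anonymous access is limited to 100 requests/day; a free key
    raises that to 1,000/day per the page above.
    """
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)
```

For example, `quality_url("prompt-engineering", "citiususc/smarty-gpt")` reproduces the exact URL used in the curl command.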