chr15m/runprompt

Run LLM prompts from your shell

Quality score: 44 / 100 (Emerging)

This tool helps anyone who needs to quickly get responses from Large Language Models (LLMs) right from their command line. You provide a `.prompt` file containing your request and any necessary setup, then feed in your data (like text or JSON). It outputs the LLM's response, which can be free-form text or structured JSON, making it easy for operations engineers, data analysts, or researchers to automate text processing tasks.
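As a rough sketch of that workflow (the `.prompt` file contents and the exact invocation below are assumptions for illustration, not taken from the project's documentation, so flags and syntax may differ):

```shell
# Hypothetical example; check the project's README for the real .prompt format.
# Write a prompt file containing the instructions for the LLM:
cat > summarize.prompt <<'EOF'
Summarize the following text in one sentence.
EOF

# Pipe your data in on stdin and capture the model's response:
cat article.txt | runprompt summarize.prompt > summary.txt
```

Because input arrives on stdin and output goes to stdout, the tool slots into ordinary shell pipelines alongside utilities like `jq` or `grep`.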


Use this if you need to run pre-defined LLM prompts, pass in dynamic data, and get structured outputs directly from your shell, especially for scripting or automating text-based tasks.

Not ideal if you're looking for a graphical user interface (GUI) or a full-fledged application development framework for building complex AI-powered systems.

command-line-automation text-processing scripting data-extraction content-generation
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 13 / 25
Community 11 / 25


Stars: 430
Forks: 17
Language: Python
License: MIT
Last pushed: Mar 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/chr15m/runprompt"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.