bold84/cot_proxy

Smart proxy for LLM APIs that enables model-specific parameter control, automatic mode switching (like Qwen3's /think and /no_think), and tag filtering. Perfect for using advanced models with apps that lack parameter customization.

Quality score: 36 / 100 (Emerging)

This tool helps teams and individuals manage their interactions with Large Language Models, especially models like Qwen3 that switch between "thinking" and normal operation. It takes your application's standard LLM requests and modifies them in flight, adding model-specific parameters, altering prompts, or cleaning up verbose model outputs, before forwarding them to the LLM. Marketers, researchers, or anyone using multiple applications with advanced LLMs will find it useful for standardizing interactions and getting cleaner, more relevant results.
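The transformations described above can be sketched in a few lines of Python. This is a minimal illustration of the concept, not cot_proxy's actual code: the mapping format, key names, and pseudo-model name are all hypothetical, but the two core ideas (rewriting a request to inject parameters and a Qwen3 `/no_think` suffix, and stripping `<think>` blocks from responses) match what the description says the proxy does.

```python
import re

# Hypothetical mapping: a "pseudo-model" name your app sends is swapped
# for a real model plus fixed parameters and a mode-switching suffix.
MODEL_MAP = {
    "qwen3-fast": {                       # name is illustrative only
        "model": "qwen3-32b",
        "params": {"temperature": 0.7, "top_p": 0.8},
        "prompt_suffix": " /no_think",    # disable Qwen3's thinking mode
    },
}

# Matches a <think>...</think> block (and trailing whitespace) in output.
THINK_TAGS = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def rewrite_request(request: dict) -> dict:
    """Swap the pseudo-model for the real one and inject params/suffix."""
    rule = MODEL_MAP.get(request["model"])
    if rule is None:
        return request  # unknown models pass through unchanged
    out = dict(request, model=rule["model"], **rule["params"])
    messages = [dict(m) for m in request["messages"]]
    messages[-1]["content"] += rule["prompt_suffix"]
    out["messages"] = messages
    return out

def filter_response(text: str) -> str:
    """Strip <think>...</think> reasoning blocks from model output."""
    return THINK_TAGS.sub("", text)
```

With this in place, an app that only knows how to send a model name still gets tuned parameters and clean, tag-free completions.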

No commits in the last 6 months.

Use this if your applications or workflows struggle to get consistent or optimal responses from advanced Large Language Models because those applications lack the ability to customize model parameters or manage model modes.

Not ideal if you primarily interact with LLMs directly through their native interfaces or if your applications already offer extensive parameter customization for your specific models.

Tags: AI-workflow-management, LLM-operations, content-generation, AI-tool-integration, prompt-engineering
Flags: Stale (6 months), no published package, no dependents

Maintenance: 2 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 51
Forks: 5
Language: Python
License: MIT
Category: llm-api-gateways
Last pushed: May 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/bold84/cot_proxy"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.