teticio/openai-proxy

OpenAI API proxy for fine-grained cost tracking and control, with response caching

Score: 31 / 100 (Emerging)

This project helps engineering and product teams manage their OpenAI API spend. It sits between your application and OpenAI as an intermediary, forwarding API requests while attributing the cost of each call to a user, a project, and the specific model used. Team leads and CTOs can then see, and cap, how much is being spent on each initiative and by whom.
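The per-user/project/model attribution described above boils down to pricing each call by its token usage and aggregating by those three keys. A minimal sketch of that idea, using purely illustrative prices (the names and numbers below are assumptions, not the project's actual rate table; real OpenAI prices vary by model and change over time):

```python
from collections import defaultdict

# Hypothetical per-1K-token prices (illustrative only).
PRICES_PER_1K = {
    "gpt-4o": {"prompt": 0.005, "completion": 0.015},
    "gpt-4o-mini": {"prompt": 0.00015, "completion": 0.0006},
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the dollar cost of one API call from its token usage."""
    p = PRICES_PER_1K[model]
    return (prompt_tokens / 1000) * p["prompt"] + (completion_tokens / 1000) * p["completion"]

# Aggregate spend per (user, project, model), as the proxy's tracking implies.
totals: dict = defaultdict(float)
totals[("alice", "chatbot", "gpt-4o")] += request_cost("gpt-4o", 1200, 300)
totals[("bob", "search", "gpt-4o-mini")] += request_cost("gpt-4o-mini", 800, 200)
```

With totals keyed this way, a budget check before forwarding a request is a single dictionary lookup.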

No commits in the last 6 months.

Use this if you need to track and limit OpenAI API costs across different projects, users, or development stages within your organization, or if you want to cache responses to save on repeated calls.
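The caching mentioned above is typically keyed on the request body: identical requests return the stored response instead of hitting OpenAI again. A minimal sketch of one way to build such a key (the hashing scheme here is an assumption for illustration, not the project's actual implementation):

```python
import hashlib
import json

def cache_key(payload: dict) -> str:
    """Deterministic key: hash the canonicalised JSON request body."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

cache: dict = {}
req = {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]}
key = cache_key(req)
if key not in cache:
    # Forward to OpenAI here, then store the response:
    # cache[key] = response
    pass
```

Sorting keys before hashing makes the cache key stable regardless of the order in which the request's fields were serialised.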

Not ideal if you are an individual user with simple API needs and don't require detailed cost tracking or management for multiple projects/users.

Tags: API-cost-management, budgeting, resource-governance, AI-application-development, finops
Flags: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 9 / 25


Stars: 17
Forks: 2
Language: Python
License: BSD-3-Clause
Last pushed: Mar 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/teticio/openai-proxy"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.