MaurerKrisztian/two-step-llm-tool-call

Make LLM Tools Work Better and Cheaper with a Two-Step Tool Call

Score: 14 / 100 (Experimental)

When you use the tool-calling feature of large language models (LLMs) such as OpenAI's GPT, API costs climb because detailed function definitions are sent with every request. This project reduces those costs, and can improve model performance, by sending detailed function instructions to the LLM only when they are actually needed: it takes your initial request and a list of available functions, then returns the LLM's response at a lower cost.
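The listing does not show the project's actual API, but the two-step idea can be sketched in TypeScript: the first request carries only tool names and one-line summaries, and once the model names the tool it wants, a second request carries the full JSON Schema for just that tool. The helper names below are illustrative, not from the repository:

```typescript
// A full tool definition, as sent to an LLM tool-calling API.
interface ToolDef {
  name: string;
  description: string;
  parameters: object; // JSON Schema — the bulky part we want to avoid resending
}

const tools: ToolDef[] = [
  {
    name: "get_weather",
    description: "Look up current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
  {
    name: "send_email",
    description: "Send an email to a recipient",
    parameters: {
      type: "object",
      properties: { to: { type: "string" }, body: { type: "string" } },
    },
  },
];

// Step 1: build a lightweight catalog — names and summaries only —
// cheap enough to include in every request.
function toCatalog(defs: ToolDef[]): string {
  return defs.map((t) => `${t.name}: ${t.description}`).join("\n");
}

// Step 2: expand only the tools the model picked by name, so the
// follow-up request carries full schemas for just those tools.
function expandPicked(defs: ToolDef[], picked: string[]): ToolDef[] {
  return defs.filter((t) => picked.includes(t.name));
}
```

In a real flow, `toCatalog(tools)` would go into the first prompt, the picked names would be parsed from the model's reply, and only `expandPicked(tools, picked)` would be attached as tool definitions in the second request.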

No commits in the last 6 months.

Use this if you are a developer integrating LLMs with many custom tools or functions and want to optimize API costs and model performance.

Not ideal if you are using LLMs for simple, single-turn interactions without integrating a large number of custom tools.

Tags: LLM application development · API cost optimization · AI tool integration · prompt engineering · developer tools
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 0 / 25

How are scores calculated?

Stars: 19
Forks:
Language: TypeScript
License: none
Last pushed: Mar 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/MaurerKrisztian/two-step-llm-tool-call"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.