MaurerKrisztian/two-step-llm-tool-call
Make LLM Tools Work Better and Cheaper with a Two-Step Tool Call
When you use the tool-calling feature of large language models (LLMs) such as OpenAI's GPT models, API costs can grow quickly because every request carries the full definition of every available function. This project reduces that overhead, and can improve model performance, by sending detailed function definitions only when the model actually needs them: given your initial request and the list of available functions, it returns the LLM's response at a lower cost.
No commits in the last 6 months.
Use this if you are a developer integrating LLMs with many custom tools or functions and want to optimize API costs and model performance.
Not ideal if you are using LLMs for simple, single-turn interactions without integrating a large number of custom tools.
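The two-step idea in the description can be sketched in TypeScript. This is a minimal illustration, not the repository's actual API; the tool names, summaries, and helper functions below are assumptions made up for the example. Step one sends the model only a lightweight catalog (name plus one-line summary) so it can pick a tool; step two sends the full JSON Schema for the selected tool only.

```typescript
// Illustrative sketch of a two-step tool call; names are hypothetical,
// not the repo's actual API.

interface ToolDef {
  name: string;
  summary: string;
  parameters: object; // full JSON Schema, sent only in step 2
}

const tools: ToolDef[] = [
  {
    name: "get_weather",
    summary: "Fetch current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string" },
        units: { type: "string", enum: ["metric", "imperial"] },
      },
      required: ["city"],
    },
  },
  {
    name: "search_docs",
    summary: "Full-text search over documentation",
    parameters: {
      type: "object",
      properties: { query: { type: "string" }, limit: { type: "number" } },
      required: ["query"],
    },
  },
];

// Step 1 payload: a lightweight catalog the model chooses from.
function buildStepOnePayload(defs: ToolDef[]) {
  return defs.map(({ name, summary }) => ({ name, summary }));
}

// Step 2 payload: full definitions, but only for the tools the model selected.
function buildStepTwoPayload(defs: ToolDef[], selected: string[]) {
  return defs.filter((d) => selected.includes(d.name));
}

const stepOne = buildStepOnePayload(tools);
// Suppose the model's step-1 answer picked "get_weather":
const stepTwo = buildStepTwoPayload(tools, ["get_weather"]);

const fullSize = JSON.stringify(tools).length;
const twoStepSize =
  JSON.stringify(stepOne).length + JSON.stringify(stepTwo).length;
console.log(`all schemas: ${fullSize} bytes, two-step: ${twoStepSize} bytes`);
```

With many tools, the step-1 catalog stays small while the full schemas are paid for only when a tool is actually chosen, which is where the cost saving comes from.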
Stars: 19
Forks: —
Language: TypeScript
License: —
Category: —
Last pushed: Mar 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/MaurerKrisztian/two-step-llm-tool-call"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mozilla-ai/any-llm
Communicate with an LLM provider using a single interface
Maximilian-Winter/llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models...
CliDyn/climsight
A next-generation climate information system that uses large language models (LLMs) alongside...
ShishirPatil/gorilla
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
OoriData/OgbujiPT
Client-side toolkit for using large language models, including where self-hosted