crazyyanchao/llmcompiler

LLMCompiler is an agent architecture designed to speed up agent tasks by executing independent tool calls in parallel as a directed acyclic graph (DAG). It also saves the cost of redundant token use by reducing the number of calls to the LLM.
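To illustrate the core idea, here is a minimal sketch of DAG-style parallel task execution using `asyncio`. This is a generic illustration of the technique, not the llmcompiler library's actual API; the `run_dag` function and its task/dependency format are hypothetical.

```python
import asyncio

async def run_dag(tasks, deps):
    """Execute a DAG of async tasks, running independent tasks concurrently.

    Hypothetical illustration, not the llmcompiler API.
    tasks: name -> zero-arg async callable
    deps:  name -> set of prerequisite task names
    """
    results = {}
    pending = dict(deps)
    while pending:
        # Tasks whose prerequisites are all finished are ready to run.
        ready = [name for name, d in pending.items() if d <= results.keys()]
        # All ready tasks run in parallel, mirroring how a planner-produced
        # DAG lets independent tool calls overlap instead of running serially.
        done = await asyncio.gather(*(tasks[name]() for name in ready))
        for name, result in zip(ready, done):
            results[name] = result
            del pending[name]
    return results
```

For example, two independent search calls can run concurrently, with a summarization step waiting on both; in a real agent each callable would wrap a tool invocation planned by the LLM.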

Score: 55 / 100 (Established)

This tool helps AI engineers build intelligent agents that handle complex, multi-step tasks efficiently. It takes a user's natural language request and a set of available tools, then generates an optimized plan to execute those tasks. The result is a completed task delivered quickly and cost-effectively, making it well suited to those using large language models to automate workflows.

Available on PyPI.

Use this if you are building an AI agent that needs to perform many different actions or use numerous tools to fulfill a user's request, and you want to reduce execution time and computational costs.

Not ideal if your agent's tasks are very simple, involve only one or two tool calls, or if you are not concerned with optimizing execution speed or token usage.

Tags: AI Agent Development, Large Language Model Orchestration, Workflow Automation, Intelligent Systems, Tool Integration
Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 25 / 25
Community: 16 / 25


Stars: 55
Forks: 10
Language: Python
License: Apache-2.0
Last pushed: Nov 06, 2025
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/crazyyanchao/llmcompiler"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.