WecoAI/weco-cli
The Platform for Self-Improving Code. Ideal for GPU kernels, ML model development, feature engineering, prompt engineering, and other optimizable code.
Weco helps engineers and researchers automatically improve their code for tasks such as optimizing GPU operations, refining machine learning models, or enhancing large language model prompts. You provide your existing code and an evaluation script that outputs a performance metric (e.g., latency, accuracy, win rate); Weco then iteratively modifies your code to maximize or minimize that metric, delivering a more performant version of the original. It is aimed at machine learning engineers, GPU developers, and prompt engineers looking to boost code efficiency or model quality.
Use this if you have code that needs to be systematically optimized based on specific performance metrics, and you want an automated process to explore improvements.
Not ideal if you're looking for a simple bug-fixing tool or a general code refactoring solution without a clear, quantifiable metric for improvement.
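The workflow above hinges on an evaluation script that prints a single quantifiable metric for the optimizer to drive toward. A minimal sketch of what such a script might look like is below; the metric name "latency", the `solution` function, and the output format are illustrative assumptions, not taken from the weco-cli documentation.

```python
# evaluate.py -- illustrative evaluation script for a metric-driven
# optimizer. It times the code under optimization and prints the result
# as a single named metric that a tool like Weco could minimize.
# NOTE: "latency", solution(), and the "name: value" output format are
# hypothetical examples, not the actual weco-cli interface.
import time


def solution(n: int) -> int:
    # Placeholder for the code being optimized; here, a toy workload.
    return sum(i * i for i in range(n))


def evaluate() -> float:
    start = time.perf_counter()
    solution(100_000)
    elapsed = time.perf_counter() - start
    # Emit the metric in a parseable "name: value" form on stdout.
    print(f"latency: {elapsed:.6f}")
    return elapsed


if __name__ == "__main__":
    evaluate()
```

The key design point is that the script is self-contained and deterministic enough that successive runs are comparable: the optimizer only ever sees the printed number, so anything not reflected in that metric will not be preserved.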
Stars: 32
Forks: 6
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/WecoAI/weco-cli"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
shcherbak-ai/contextgem — ContextGem: Effortless LLM extraction from documents
mufeedvh/code2prompt — A CLI tool to convert your codebase into a single LLM prompt with source tree, prompt...
ShahzaibAhmad05/gitree — An upgrade from "ls" for developers. An open-source tool to analyze folder structures and to...
nicepkg/ctxport — Copy AI conversations as clean Markdown Context Bundles — one click from ChatGPT, Claude,...
nikolay-e/treemapper — Export your entire codebase to ChatGPT/Claude in one command. Structure + contents in YAML/JSON...