YerbaPage/SWE-Exp
SWE-Exp: Experience-Driven Software Issue Resolution
SWE-Exp helps software developers automate issue resolution by reusing experience from past fixes. It takes descriptions of software issues together with historical resolution attempts, then outputs suggested code changes and problem-solving strategies, so developers can quickly identify and apply solutions to new bugs or code improvements.
Use this if you are a software developer who frequently deals with recurring code issues, or who wants to leverage past problem-solving knowledge to accelerate new bug fixes and optimizations.
Not ideal if you need a tool that works without any prior data, or if your development workflow does not involve analyzing historical code changes and their resolutions.
Stars: 37
Forks: 2
Language: Python
License: Apache-2.0
Category:
Last pushed: Oct 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/YerbaPage/SWE-Exp"
The API is open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
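The same endpoint can be called from code. A minimal Python sketch, assuming only the URL pattern shown in the curl command above (the response schema is not documented here, so this helper just builds the per-repository request URL):

```python
from urllib.parse import quote

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub owner/repo pair."""
    # quote() guards against characters that are unsafe in a URL path segment.
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

# Fetch it with any HTTP client, e.g.:
#   import urllib.request
#   data = urllib.request.urlopen(quality_url("YerbaPage", "SWE-Exp")).read()
```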
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems