ExpertiseModel/MuTAP
MuTAP: A prompt-based learning technique to automatically generate test cases with Large Language Models
This tool helps software developers automatically generate comprehensive unit test cases for their Python programs using Large Language Models (LLMs). It takes a program under test and an initial prompt (zero-shot or few-shot) as input, then generates test cases. These tests are iteratively refined and augmented to achieve better code coverage and identify more bugs, resulting in a more robust set of test cases.
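The refine-and-augment loop described above can be sketched in Python. This is a minimal illustration of the control flow only, not MuTAP's actual API: the function names (`build_prompt`, `refine_tests`) and the stub LLM and mutation runner are hypothetical, standing in for a real LLM call and a real mutation-testing tool such as MutPy.

```python
# Hypothetical sketch of MuTAP-style iterative test refinement.
# All names here are illustrative, not the tool's real interface.

def build_prompt(code, surviving_mutants=()):
    """Assemble a (zero-shot) prompt asking an LLM for unit tests,
    augmented with any mutants the current tests failed to kill."""
    prompt = f"Generate pytest unit tests for:\n{code}\n"
    for mutant in surviving_mutants:
        prompt += f"\nAdd a test that kills this surviving mutant:\n{mutant}\n"
    return prompt

def refine_tests(code, llm, run_mutation_testing, max_rounds=3):
    """Re-prompt the LLM until no mutants survive or rounds run out."""
    survivors, tests = (), ""
    for _ in range(max_rounds):
        tests = llm(build_prompt(code, survivors))
        survivors = run_mutation_testing(code, tests)
        if not survivors:
            break
    return tests

# Stubs to demonstrate the loop deterministically, without a real LLM.
def stub_llm(prompt):
    tests = "def test_add():\n    assert add(1, 2) == 3\n"
    if "mutant" in prompt:  # augmented prompt -> stronger test suite
        tests += "def test_add_neg():\n    assert add(-1, 1) == 0\n"
    return tests

def stub_mutation(code, tests):
    # Pretend one mutant (+ flipped to -) survives until the
    # negative-number test appears in the suite.
    return () if "test_add_neg" in tests else ("add(a, b) -> a - b",)

tests = refine_tests("def add(a, b): return a + b", stub_llm, stub_mutation)
```

After one augmentation round the stub suite gains the mutant-killing test, which is the shape of improvement MuTAP aims for with real LLM output.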
No commits in the last 6 months.
Use this if you are a software developer looking to automate and improve the effectiveness of your unit test generation using advanced AI techniques, especially when traditional manual test writing is time-consuming or insufficient.
Not ideal if you need a tool for generating integration tests, end-to-end tests, or if you prefer to write all your test cases manually without AI assistance.
Stars: 54
Forks: 11
Language: Python
License: —
Category: —
Last pushed: Mar 07, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ExpertiseModel/MuTAP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
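The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming nothing beyond the URL shown above; the response schema is not documented here, so the actual fetch is left as a commented-out step to run and inspect:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def repo_stats_url(owner, name):
    """Build the API URL for a given owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(name)}"

url = repo_stats_url("ExpertiseModel", "MuTAP")

# The JSON fields returned are not specified on this page; fetch and
# pretty-print the raw response to see them (requires network access):
# with urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```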
Related tools
INPVLSA/probefish
A web-based LLM prompt and endpoint testing platform. Organize, version, test, and validate...
thabit-ai/thabit
Thabit is a platform for evaluating prompts across multiple LLMs to determine the best one for your data
nicolay-r/llm-prompt-checking
Toolset for checking differences in recognising semantic relation presence by: (1) large...
alexandrughinea/lm-tiny-prompt-evaluation-framework
This project provides a tiny framework for testing different prompt versions with various AI...