ExpertiseModel/MuTAP

MuTAP: A prompt-based learning technique to automatically generate test cases with Large Language Models

Score: 33 / 100 (Emerging)

This tool helps software developers automatically generate comprehensive unit test cases for their Python programs using Large Language Models (LLMs). It takes a program under test and an initial prompt (zero-shot or few-shot) as input, then generates test cases. These tests are iteratively refined and augmented to achieve better code coverage and identify more bugs, resulting in a more robust set of test cases.

No commits in the last 6 months.

Use this if you are a software developer looking to automate and improve the effectiveness of your unit test generation using advanced AI techniques, especially when traditional manual test writing is time-consuming or insufficient.

Not ideal if you need a tool for generating integration tests, end-to-end tests, or if you prefer to write all your test cases manually without AI assistance.

software-testing unit-testing test-automation code-quality python-development
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 17 / 25


Stars: 54
Forks: 11
Language: Python
License: None
Last pushed: Mar 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ExpertiseModel/MuTAP"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.