amirdeljouyi/UTGen
Replication package of the ICSE2025 paper titled "Leveraging Large Language Models for Enhancing the Understandability of Generated Unit Tests"
UTGen helps software quality assurance engineers and developers improve the readability of automatically generated unit tests. It post-processes existing unit test code with a large language model to produce versions that are easier for humans to read and comprehend. Its primary users are software developers and QA teams who maintain or debug automatically generated test suites.
No commits in the last 6 months.
Use this if you are a software developer or QA engineer looking to make your automatically generated Java unit tests more understandable and maintainable.
Not ideal if you are looking for a tool to generate tests from scratch or if you are working with languages other than Java.
Stars: 11
Forks: 3
Language: Java
License: Apache-2.0
Last pushed: Feb 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/amirdeljouyi/UTGen"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
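The same endpoint can be called from a script. A minimal sketch, assuming only what the curl example above shows (the endpoint path and the owner/repo URL scheme); the shape of the JSON response is not documented here, so the helper only builds the request URL:

```python
# Build the quality-data endpoint URL for a given GitHub repository.
# The base path is taken from the curl example; `api_url` is a
# hypothetical helper name, not part of any official client.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def api_url(owner: str, repo: str) -> str:
    """Return the endpoint URL for one repository (owner/name)."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

print(api_url("amirdeljouyi", "UTGen"))
# The URL can then be fetched with any HTTP client, e.g.:
#   import urllib.request, json
#   data = json.load(urllib.request.urlopen(api_url("amirdeljouyi", "UTGen")))
```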
Higher-rated alternatives
open-compass/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral,...
IBM/unitxt
🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the...
lean-dojo/LeanDojo
Tool for data extraction and interacting with Lean programmatically.
GoodStartLabs/AI_Diplomacy
Frontier Models playing the board game Diplomacy.
google/litmus
Litmus is a comprehensive LLM testing and evaluation tool designed for GenAI Application...