Intelligent-CAT-Lab/PLTranslationEmpirical
Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code", in Proceedings of the 46th IEEE/ACM International Conference on Software Engineering (ICSE 2024), Lisbon, Portugal, April 2024.
This project provides the tools and dataset to study how large language models (LLMs) introduce bugs when translating code between programming languages. It pairs original code snippets and their test cases with each LLM's translated output so you can measure translation accuracy and identify common bug patterns. Software engineers, researchers, and technical leads interested in the reliability of LLM-powered code translation will find it valuable.
No commits in the last 6 months.
Use this if you need to empirically evaluate the quality and bug-proneness of code translated by various large language models across different programming languages.
Not ideal if you are looking for a simple, production-ready tool to perform code translation without needing to analyze the translation process or underlying model performance.
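As a rough illustration of the evaluation workflow described above, the sketch below checks whether a translated snippet still passes the source program's stdin/stdout test cases. The file path, test-case format, and `passes_tests` helper are hypothetical illustrations, not the repository's actual driver.

```python
# Minimal sketch (hypothetical names/paths): verify that a translated
# program still produces the expected output for each test case.
import subprocess
import sys


def passes_tests(translated_file: str, test_cases: list[tuple[str, str]]) -> bool:
    """Run each (stdin, expected_stdout) pair against the translated program.

    Assumes a Python translation target; other target languages would need
    their own compile/run steps.
    """
    for stdin_text, expected in test_cases:
        result = subprocess.run(
            [sys.executable, translated_file],
            input=stdin_text,
            capture_output=True,
            text=True,
            timeout=10,
        )
        # A translation bug shows up as a crash or an output mismatch.
        if result.returncode != 0 or result.stdout.strip() != expected.strip():
            return False
    return True


# Hypothetical usage: one test case feeding "2 3" and expecting "5".
# passes_tests("translated/add.py", [("2 3\n", "5")])
```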
Stars: 51
Forks: 10
Language: Python
License: MIT
Last pushed: Apr 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Intelligent-CAT-Lab/PLTranslationEmpirical"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
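If you would rather query the endpoint from Python than curl, a minimal standard-library sketch follows. It assumes the API returns JSON; the response schema isn't documented here.

```python
# Minimal sketch: fetch the same quality data the curl command above returns.
import json
import urllib.request

url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/Intelligent-CAT-Lab/PLTranslationEmpirical"
)
with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON response body
print(data)
```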
Higher-rated alternatives
HowieHwong/TrustLLM
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
rishub-tamirisa/tamper-resistance
[ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"
tsinghua-fib-lab/ANeurIPS2024_SPV-MIA
[NeurIPS'24] "Membership Inference Attacks against Fine-tuned Large Language Models via...
FudanDISC/ReForm-Eval
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
codessian/epistemic-confidence-layer
Model-agnostic trust protocol for calibrated, auditable AI