LIANGQINGYUAN/Lyra
Lyra: A Benchmark for Turducken-Style Code Generation
This project provides a benchmark dataset and tools for evaluating how well AI models generate Python code snippets that embed SQL statements, starting from natural-language comments. Given an English or Chinese comment describing a desired database operation together with a Python function, the task is to produce the corresponding Python code with the SQL embedded in it. Database developers, data scientists, and anyone who frequently writes Python code to interact with databases may find it useful.
No commits in the last 6 months.
Use this if you are a researcher or developer focused on improving AI models that automatically generate Python code containing SQL queries from natural language descriptions.
Not ideal if you are looking for an off-the-shelf tool to generate production-ready code; this is a benchmark for evaluating AI model performance, not an end-user code generation application.
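To make the "turducken" format concrete, here is a hypothetical input/output pair in the style the benchmark targets: a natural-language comment describing a database operation, and Python code with an embedded SQL query that satisfies it. The schema, comment, and function names are illustrative only and are not taken from the Lyra dataset.

```python
import sqlite3


def users_older_than(conn: sqlite3.Connection, min_age: int) -> list:
    # The model's input would be a comment like:
    # "Return the name and age of every user older than min_age,
    #  sorted by age in descending order."
    # The expected output is Python with the SQL embedded as a string:
    cursor = conn.execute(
        "SELECT name, age FROM users WHERE age > ? ORDER BY age DESC",
        (min_age,),
    )
    return cursor.fetchall()


# Minimal in-memory fixture so the snippet runs end to end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("Ada", 36), ("Ben", 19), ("Cy", 52)],
)
print(users_older_than(conn, 30))  # → [('Cy', 52), ('Ada', 36)]
```

Evaluating such outputs is harder than plain code generation because both the host Python and the embedded SQL string must be correct, which is exactly the gap this benchmark measures.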
Stars: 15
Forks: —
Language: Python
License: GPL-3.0
Category: —
Last pushed: Apr 22, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/LIANGQINGYUAN/Lyra"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
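The same endpoint can be called from Python instead of curl. This is a minimal sketch: the URL comes from the curl example above, but the shape of the JSON response is an assumption, so the code just decodes and prints the raw payload rather than relying on specific fields.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ai-coding"


def repo_quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_repo_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(repo_quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Response fields are not documented here, so print everything.
    print(json.dumps(fetch_repo_quality("LIANGQINGYUAN", "Lyra"), indent=2))
```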
Higher-rated alternatives
k4black/codebleu
Pip-installable CodeBLEU metric implementation, available for Linux/macOS/Windows
LiveCodeBench/LiveCodeBench
Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of...
EdinburghNLP/code-docstring-corpus
Preprocessed Python functions and docstrings for automated code documentation (code2doc) and...
hendrycks/apps
APPS: Automated Programming Progress Standard (NeurIPS 2021)
solis-team/Hydra
[FSE 2026] Do Not Treat Code as Natural Language: Implications for Repository-Level Code...