LIANGQINGYUAN/Lyra

Lyra: A Benchmark for Turducken-Style Code Generation

Score: 22/100 (Experimental)

This project provides a benchmark dataset and tools for evaluating how well AI models can generate Python code snippets that include SQL statements, based on natural language comments. It takes an English or Chinese comment describing a desired database operation and a Python function, and outputs the corresponding Python code with embedded SQL. Database developers, data scientists, and anyone who frequently writes Python code to interact with databases could use this.
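To make the "Turducken-style" framing concrete, here is a minimal sketch of the kind of snippet the benchmark targets: Python host code with an embedded SQL statement, derived from a natural-language comment. The function name, table schema, and comment are hypothetical illustrations, not items from the Lyra dataset.

```python
import sqlite3

def get_user_emails(conn, min_age):
    # Hypothetical driving comment: "select the email of every user
    # older than the given age, ordered by email"
    cursor = conn.execute(
        "SELECT email FROM users WHERE age > ? ORDER BY email",
        (min_age,),
    )
    return [row[0] for row in cursor.fetchall()]

# Tiny in-memory database to exercise the function
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("a@x.com", 30), ("b@x.com", 17), ("c@x.com", 45)],
)
print(get_user_emails(conn, 18))  # ['a@x.com', 'c@x.com']
```

Evaluating a model on such tasks means checking both the Python scaffolding and the correctness of the SQL it embeds.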

No commits in the last 6 months.

Use this if you are a researcher or developer focused on improving AI models that automatically generate Python code containing SQL queries from natural language descriptions.

Not ideal if you are looking for an off-the-shelf tool to generate production-ready code; this is a benchmark for evaluating AI model performance, not an end-user code generation application.

code-generation natural-language-processing database-programming AI-benchmarking software-engineering-research
Stale (6 months) · No Package · No Dependents

Maintenance: 0/25
Adoption: 6/25
Maturity: 16/25
Community: 0/25


Stars: 15
Forks:
Language: Python
License: GPL-3.0
Last pushed: Apr 22, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/LIANGQINGYUAN/Lyra"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
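The same endpoint can be queried from Python using only the standard library. This is a sketch assuming the URL pattern shown in the curl example above; the shape of the JSON response is not documented here, so the code parses it generically rather than assuming specific fields.

```python
import json
from urllib.request import urlopen

def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL from the pattern in the curl example
    return f"https://pt-edge.onrender.com/api/v1/quality/ai-coding/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Performs a network request; response schema is not assumed here
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("LIANGQINGYUAN", "Lyra"))
    # data = fetch_quality("LIANGQINGYUAN", "Lyra")  # requires network access
```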