antgroup/ravig-bench

Official implementation of "RAViG-Bench: A Benchmark for Retrieval-Augmented Visually-rich Generation with Multi-modal Automated Evaluation"

Score: 28 / 100 (Experimental)

This tool helps you assess the quality of automatically generated web content, specifically HTML and CSS code, that incorporates visual elements and information retrieved from other documents. You feed in the generated web page code, and it provides detailed reports on whether the code works correctly, if the visual design looks good, and if the content is accurate and complete. It's ideal for developers or researchers who are building and testing systems that create web pages with rich visual layouts based on retrieved information.

Use this if you need to rigorously evaluate the functional correctness, visual appeal, and informational accuracy of automatically generated HTML/CSS web content.

Not ideal if you're manually creating web pages or only need to validate simple HTML without visual design or content quality checks.

web-content-generation html-css-validation automated-ui-testing generative-ai-evaluation design-quality-assessment
No package · No dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 13 / 25
Community 0 / 25


Stars: 10
Forks:
Language: Python
License: Apache-2.0
Last pushed: Jan 29, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/antgroup/ravig-bench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
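A minimal sketch of calling the endpoint from Python, using only the URL shown in the curl command above. The `quality_url` helper and the assumption that the endpoint returns JSON are mine, not documented by the API; adjust to the actual response schema.

```python
import json
import urllib.request

# Base path taken from the curl example above; "rag" is the path segment
# shown there, not a documented parameter.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str, kind: str = "rag") -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{kind}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report (assumes a JSON response body)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the same URL the curl example uses.
    print(quality_url("antgroup", "ravig-bench"))
```

Without an API key this stays within the 100 requests/day anonymous limit; pass a key however the service specifies (not shown here) for the higher quota.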