lil-lab/nlvr

Cornell NLVR and NLVR2 are natural language grounding datasets. Each example shows a visual input and a sentence describing it, and is annotated with the truth-value of the sentence.
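As a sketch of how such annotated examples might be consumed, assuming a JSON-lines layout where each record pairs a sentence with a truth-value label (the field names below are assumptions for illustration, not the repository's documented schema):

```python
import json

# Hypothetical JSON-lines record in the style the dataset description implies:
# one sentence per visual input, annotated with a truth-value.
# The field names ("identifier", "sentence", "label") are assumptions.
sample_line = '{"identifier": "ex-0", "sentence": "There are two boxes.", "label": "true"}'

def parse_example(line: str) -> tuple:
    """Parse one annotated example into (sentence, truth_value)."""
    record = json.loads(line)
    return record["sentence"], record["label"].lower() == "true"

sentence, truth = parse_example(sample_line)
```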

Quality score: 40 / 100 (Emerging)

This project provides data for researchers teaching AI systems to understand the relationship between language and images. Each example pairs a visual input with a sentence, annotated with whether the sentence accurately describes the image, so models can be trained and evaluated on grounded language understanding. It's aimed at AI/NLP researchers, computational linguists, and computer vision scientists developing new models for visual reasoning.

267 stars. No commits in the last 6 months.

Use this if you are a researcher developing AI models that must ground natural language descriptions in visual content, reasoning about sets of objects, comparisons, and spatial relations.

Not ideal if you are looking for a tool to process and analyze images or text for business applications such as content moderation or image search; this is a research dataset, not a production service.

AI-research computational-linguistics computer-vision natural-language-processing visual-reasoning
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 22 / 25


Stars: 267
Forks: 60
Language: HTML
License: none
Last pushed: Aug 18, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/lil-lab/nlvr"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
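If you consume the endpoint above programmatically, a minimal sketch follows. The sub-score key names are assumptions about the JSON payload, mirroring the breakdown shown on this page; they are not documented by the API.

```python
import json
from urllib.request import urlopen

API_URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/lil-lab/nlvr"

def fetch_report(url: str = API_URL) -> dict:
    """Fetch the quality report as parsed JSON (requires network access)."""
    with urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def total_score(report: dict) -> int:
    """Sum the four sub-scores into the 0-100 total.

    The key names here are assumptions based on the breakdown
    displayed on this page (Maintenance, Adoption, Maturity, Community).
    """
    return sum(int(report.get(k, 0))
               for k in ("maintenance", "adoption", "maturity", "community"))
```

With the breakdown shown on this page, `total_score({"maintenance": 0, "adoption": 10, "maturity": 8, "community": 22})` reproduces the 40 / 100 total.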