kyegomez/ScreenAI

Implementation of the ScreenAI model from the paper: "A Vision-Language Model for UI and Infographics Understanding"

Quality score: 61 / 100 (Established)

This project gives developers a vision-language model for understanding user interfaces and infographics. It takes an image (such as a screenshot or a chart) and associated text, processes them jointly, and produces an interpretable output describing their content and relationship. It's aimed at engineers building applications that need to interpret visual and textual data from screens or complex diagrams.

380 stars. Available on PyPI.

Use this if you are a developer creating applications that need to programmatically understand and extract information from screenshots, app interfaces, or detailed infographics by combining visual and textual input.

Not ideal if you are an end-user looking for a ready-to-use application to analyze UIs or infographics without programming.

Tags: UI-understanding, image-analysis, natural-language-processing, developer-tool, AI-model-integration
Maintenance 10 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 16 / 25


Stars: 380
Forks: 36
Language: Python
License: MIT
Last pushed: Feb 06, 2026
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kyegomez/ScreenAI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
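If you'd rather fetch this data from Python than shell out to curl, the same endpoint can be called with the standard library. A minimal sketch, assuming only the endpoint URL shown in the curl example above; the shape of the JSON response is not documented here, so it is returned as a plain dict:

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict.

    Without an API key the service allows 100 requests/day;
    a free key raises that to 1,000/day.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


url = quality_url("ml-frameworks", "kyegomez", "ScreenAI")
# fetch_quality("ml-frameworks", "kyegomez", "ScreenAI") would perform
# the same request as the curl command above.
```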