oswaldoludwig/visually-informed-embedding-of-word-VIEW-
Visually informed embedding of word (VIEW) is a tool for transferring multimodal background knowledge to NLP algorithms.
This project helps natural language processing (NLP) researchers and practitioners improve how their algorithms understand spatial relationships described in text. It takes textual descriptions of visual scenes (like image captions) and produces specialized word embeddings that capture visual and spatial context. These embeddings can then be combined with standard word embeddings to enhance algorithms that need to interpret where objects are in relation to each other.
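The combination step described above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual API: the function name, vector dimensions, and the choice of concatenation as the combination method are all assumptions.

```python
import numpy as np

def combine_embeddings(standard_vec, view_vec):
    """Concatenate a standard word embedding with a VIEW-style
    visually informed embedding to form one input vector for a
    downstream NLP model. (Illustrative only; dimensions assumed.)"""
    return np.concatenate([standard_vec, view_vec])

# e.g. a 300-d word2vec vector plus a 30-d visual/spatial vector -> 330-d
standard = np.random.rand(300)
view = np.random.rand(30)
combined = combine_embeddings(standard, view)
print(combined.shape)  # (330,)
```

Concatenation keeps the two embedding spaces separate, so the downstream model can learn how much weight to give the visual signal versus the distributional one.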
No commits in the last 6 months.
Use this if you are developing or evaluating NLP models, especially for tasks like Spatial Role Labeling, and need to improve their comprehension of spatial language.
Not ideal if your NLP task does not involve understanding spatial relationships between objects, or if you prefer to use pre-trained models without custom training and embedding generation.
Stars: 29
Forks: 11
Language: Python
License: BSD-2-Clause
Category:
Last pushed: Sep 18, 2016
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/oswaldoludwig/visually-informed-embedding-of-word-VIEW-"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
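The same endpoint can be called from Python. A minimal sketch, assuming only the URL shown in the curl command above; the response schema is not documented here, so any parsing of the returned JSON would be an assumption:

```python
from urllib.parse import quote
from urllib.request import urlopen
import json

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a given repo slug."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("oswaldoludwig", "visually-informed-embedding-of-word-VIEW-")
print(url)

# Actually fetching requires network access; uncomment to try:
# with urlopen(url) as resp:
#     data = json.load(resp)
```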
Higher-rated alternatives:
- cosmosgl/graph: GPU-accelerated force graph layout and rendering
- Clay-foundation/model: The Clay Foundation Model - An open source AI model and interface for Earth
- nomic-ai/nomic: Nomic Developer API SDK
- omoindrot/tensorflow-triplet-loss: Implementation of triplet loss in TensorFlow
- sashakolpakov/dire-jax: DImensionality REduction in JAX