claws-lab/projection-in-MLLMs

Code and data for the ACL 2024 paper 'Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space'.

Overall score: 19 / 100 (Experimental)

This project helps AI researchers and practitioners understand how Multimodal Large Language Models (MLLMs) process visual information. It provides code and data for analyzing whether visual attributes are actually projected into textual space within these models, using datasets from domains such as agriculture, dermatology, and humanitarian response. Users supply images and their associated labels to fine-tune and evaluate MLLMs.
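A minimal sketch of the image-and-label records such a fine-tuning and evaluation pipeline consumes; the Example dataclass, file paths, and label strings below are hypothetical illustrations, not this repository's actual data schema.

# Hypothetical image + label records for MLLM fine-tuning/evaluation;
# field names, paths, and labels are illustrative, not this repo's schema.
from dataclasses import dataclass

@dataclass
class Example:
    image_path: str  # path to the input image
    label: str       # domain-specific class label

# One invented record per domain mentioned above.
dataset = [
    Example("images/agriculture/leaf_0001.jpg", "leaf_blight"),
    Example("images/dermatology/lesion_0042.jpg", "melanoma"),
    Example("images/humanitarian/flood_0107.jpg", "infrastructure_damage"),
]

for ex in dataset:
    print(f"{ex.image_path} -> {ex.label}")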

No commits in the last 6 months.

Use this if you are a researcher or advanced practitioner working with MLLMs and want to rigorously evaluate and understand how these models integrate visual data with language.

Not ideal if you are looking for a ready-to-use application or a general-purpose MLLM for immediate deployment, as this is a research-focused toolkit.

Tags: AI Research · Multimodal AI · Large Language Models · Computer Vision · Machine Learning · Evaluation
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 5 / 25

How are scores calculated?
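Judging from the figures on this page, the overall score appears to be the simple sum of the four sub-scores, each out of 25; this is an inference from the numbers shown, not documented behavior.

# Inferred from this page: the four sub-scores (each out of 25)
# sum to the overall score out of 100.
subscores = {"Maintenance": 0, "Adoption": 6, "Maturity": 8, "Community": 5}
total = sum(subscores.values())
print(total)  # 19, matching the 19 / 100 shown above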

Stars: 19
Forks: 1
Language: Python
License: None
Last pushed: Jul 21, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/claws-lab/projection-in-MLLMs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
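A minimal sketch of calling the same endpoint from Python instead of curl; the response schema isn't documented here, so the sketch just pretty-prints whatever JSON the API returns.

# Fetch the quality data shown on this page via the public API.
# Endpoint URL copied from the curl example above; the JSON schema
# is undocumented here, so we simply pretty-print the response.
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/claws-lab/projection-in-MLLMs")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))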