alexander-moore/vlm

Composition of Multimodal Language Models From Scratch

Score: 24 / 100 (Experimental)

This project lets AI researchers and machine learning engineers explore how to combine existing large language models with image encoders to build new multimodal systems. It pairs a pre-trained LLM with an image encoder, inserts an adapter module between them, and trains only that adapter, so the LLM learns to use visual information without its core weights being retrained. Researchers advancing multimodal AI would use it to build and experiment with novel vision-language models.
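To make the adapter idea concrete, here is a minimal PyTorch sketch of the general pattern the description refers to: a small trainable projection that maps frozen image-encoder features into the LLM's embedding space. The module names and dimensions below are hypothetical illustrations, not taken from this repository.

import torch
import torch.nn as nn

class VisionAdapter(nn.Module):
    # Projects image-encoder features into the LLM's token-embedding space.
    # vision_dim and llm_dim are hypothetical; the repo's actual sizes may differ.
    def __init__(self, vision_dim=768, llm_dim=4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_features):
        return self.proj(image_features)

# Stand-in for a real pre-trained image encoder (e.g. a ViT); kept frozen.
vision_encoder = nn.Linear(3 * 224 * 224, 768)
for p in vision_encoder.parameters():
    p.requires_grad = False

adapter = VisionAdapter()  # the only component with trainable parameters

images = torch.randn(2, 3 * 224 * 224)  # toy batch of flattened images
with torch.no_grad():
    features = vision_encoder(images)
visual_tokens = adapter(features)  # shape (2, 4096): ready to splice into the LLM's input embeddings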

No commits in the last 6 months.

Use this if you are an AI researcher or machine learning engineer looking to build and experiment with multimodal large language models from scratch, specifically focusing on integrating visual understanding into existing LLM architectures.

Not ideal if you are an end-user seeking a pre-built, ready-to-use multimodal AI application or if you are not deeply involved in foundational AI model development.

AI Research · Machine Learning Engineering · Multimodal AI · Large Language Models · Computer Vision
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 10 / 25


Stars: 15
Forks: 2
Language: Jupyter Notebook
License: None
Last pushed: Aug 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/alexander-moore/vlm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
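For scripted access, here is a minimal Python sketch using only the standard library. It assumes the endpoint returns a JSON body; the response schema is not documented here, so the example simply prints whatever fields come back.

import json
import urllib.request

# Endpoint taken from the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/alexander-moore/vlm"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)  # assumes the API returns JSON

print(json.dumps(data, indent=2))  # inspect the returned fields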