zhongshsh/MoExtend

ACL 2024 (SRW), Official Codebase of our Paper: "MoExtend: Tuning New Experts for Modality and Task Extension"

Score: 21 / 100 (Experimental)

This project helps AI researchers and practitioners expand the capabilities of large language models (LLMs) to understand and process both text and image data. It takes an existing text-only LLM and integrates new 'experts' so it can handle visual information without costly retraining. The result is an LLM that can perform tasks requiring both language and vision, useful for those working on advanced AI applications.

No commits in the last 6 months.

Use this if you need to quickly adapt a pre-trained large language model to understand and integrate visual information, or extend its abilities to new multimodal tasks without starting from scratch.

Not ideal if you are looking for a pre-trained, ready-to-use multimodal model, as this project focuses on the framework for adapting existing models.

Tags: multimodal-AI, large-language-models, AI-model-adaptation, computer-vision-integration, AI-research-and-development

Flags: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 14
Forks:
Language: Python
License: MIT
Last pushed: Dec 03, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zhongshsh/MoExtend"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
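For scripted use, the curl call above can be wrapped in a small Python helper. This is an illustrative sketch: the `quality_url` helper is not part of the service, and the JSON response schema is not documented on this page, so the fetch portion only pretty-prints whatever comes back rather than assuming field names.

```python
import json
import urllib.request
from urllib.parse import quote

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repo.

    The path shape (registry, then owner/name) mirrors the curl
    example; `registry` and `repo` are percent-encoded, keeping the
    '/' between owner and name intact.
    """
    return f"{API_BASE}/{quote(registry)}/{quote(repo, safe='/')}"

url = quality_url("transformers", "zhongshsh/MoExtend")

# Fetching requires network access; schema unknown, so just dump it:
# with urllib.request.urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```

Keeping the actual request commented out makes the helper usable offline (e.g. to precompute URLs for a batch of repos) while respecting the 100 requests/day limit for keyless access.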