sshh12/multi_token
Embed arbitrary modalities (images, audio, documents, etc.) into large language models.
This project lets you feed diverse inputs such as documents, images, audio, or video into a large language model alongside your text prompts. The model processes these mixed inputs together to produce answers or summaries, which is useful when a question requires understanding information across multiple formats rather than plain text alone.
191 stars. No commits in the last 6 months.
Use this if you need a language model to analyze or respond to prompts that include images, audio, documents, or video rather than text alone.
Not ideal if you work only with text-based data, or if you need highly specialized, high-fidelity analysis of a single modality, such as dedicated speech recognition or document summarization.
Stars: 191
Forks: 16
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 27, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sshh12/multi_token"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
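For scripted access, here is a minimal sketch of the same request in Python using only the standard library. The endpoint URL is copied from the curl example above; the shape of the response is not documented in this listing, so the sketch assumes JSON and simply prints whatever comes back.

import json
import urllib.request

# Endpoint taken verbatim from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/sshh12/multi_token"

# Fetch and decode the response; assumes the API returns JSON (not confirmed here).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.loads(resp.read().decode("utf-8"))

# Field names are undocumented in this listing, so print the full payload.
print(json.dumps(data, indent=2))

If you exceed the keyless limit of 100 requests/day, the listing notes that a free key raises the limit to 1,000/day; how the key is passed (header or query parameter) is not specified here.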
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice