EvolvingLMMs-Lab/Otter
🦦 Otter is a multimodal model based on OpenFlamingo (an open-source version of DeepMind's Flamingo), trained on MIMIC-IT, with improved instruction-following and in-context learning ability.
This project gives AI/ML researchers and developers a framework for building and evaluating large multimodal models: it takes combinations of images, videos, and text as input and trains models that can understand these inputs and generate responses grounded in them.
3,344 stars. No commits in the last 6 months.
Use this if you are an AI researcher or developer building or evaluating models that must process visual (image, video) and text inputs together, especially for tasks requiring detailed interpretation and instruction following.
Not ideal if you are a non-technical user looking for a ready-to-use application: the project provides tools and frameworks for AI model development and research, not end-user solutions.
Stars: 3,344
Forks: 208
Language: Python
License: MIT
Category:
Last pushed: Mar 05, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/EvolvingLMMs-Lab/Otter"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
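For programmatic access, here is a minimal Python sketch of the same request using the requests library. The URL is the one shown above; the JSON schema of the response and the API-key header name are assumptions for illustration, not documented behavior.

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/EvolvingLMMs-Lab/Otter"

resp = requests.get(url, timeout=10)
resp.raise_for_status()

data = resp.json()  # assumed to return JSON; exact field names are not documented here
print(data)

# With a free API key for the higher rate limit (header name is an assumption;
# check the API docs for the real one):
# resp = requests.get(url, headers={"X-API-Key": "YOUR_KEY"}, timeout=10)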
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM from scratch in just 1 hour!
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model