ikun-llm/ikun-V
Multimodal Vision-Language Model 👁️
Overall score: 14 / 100 (Experimental)
Flags: No License · No Package · No Dependents

Score breakdown:
  Maintenance: 13 / 25
  Adoption:     0 / 25
  Maturity:     1 / 25
  Community:    0 / 25
Repository metadata:
  Stars:         —
  Forks:         —
  Language:      —
  License:       —
  Category:      —
  Last pushed:   Mar 24, 2026
  Commits (30d): 0
Get this data via the API:

  curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ikun-llm/ikun-V"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
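The endpoint above can be wrapped in a small helper for building the request URL programmatically. Only the URL shape is taken from the page; the segment names (an ecosystem such as "transformers", then owner and repo) and the helper's name are assumptions, and the response schema is not documented here, so this sketch stops at URL construction rather than guessing at the JSON fields.

```python
from urllib.parse import quote

# Base path as shown in the page's curl example.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL, percent-encoding each path segment.

    The three-segment layout (ecosystem/owner/repo) is inferred from the
    single curl example above; it is an assumption, not documented behavior.
    """
    segments = (quote(ecosystem, safe=""), quote(owner, safe=""), quote(repo, safe=""))
    return f"{BASE_URL}/{segments[0]}/{segments[1]}/{segments[2]}"

url = quality_url("transformers", "ikun-llm", "ikun-V")
print(url)  # https://pt-edge.onrender.com/api/v1/quality/transformers/ikun-llm/ikun-V
```

From there, a plain `urllib.request.urlopen(url)` or `requests.get(url)` call would fetch the same JSON the curl command returns (an API key, if you have one, would presumably be passed as a header or query parameter, but the page does not say which).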
Higher-rated alternatives:
- KimMeen/Time-LLM (56): [ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming..."
- om-ai-lab/VLM-R1 (54): Solve Visual Understanding with Reinforced VLMs
- bytedance/SALMONN (54): SALMONN family: a suite of advanced multi-modal LLMs
- NVlabs/OmniVinci (51): OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
- fixie-ai/ultravox (51): A fast multimodal LLM for real-time voice