zai-org/GLM-V
GLM-4.6V/4.5V/4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning
2,266 stars. Actively maintained with 14 commits in the last 30 days.
Stars: 2,266
Forks: 160
Language: Python
License: Apache-2.0
Last pushed: Apr 06, 2026
Commits (30d): 14
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/multimodal/zai-org/GLM-V"
Open to everyone: 100 requests/day with no key required, or 1,000/day with a free API key.
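For programmatic use, here is a minimal Python sketch of the same call. It assumes the endpoint returns JSON; the "X-API-Key" header name and the printed payload structure are illustrative assumptions, not documented behavior.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/multimodal/zai-org/GLM-V"

def fetch_quality(api_key: str | None = None) -> dict:
    # Anonymous access is limited to 100 requests/day; a free key raises it to 1,000/day.
    req = urllib.request.Request(URL)
    if api_key:
        req.add_header("X-API-Key", api_key)  # header name is an assumption, not from the docs
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality()
    print(json.dumps(data, indent=2))  # inspect the raw payload to see the actual fields

Using the standard library keeps the snippet dependency-free; swap in requests or httpx if you already use them.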
Related tools
starVLA/starVLA
StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing
vortex-data/vortex
An extensible, state-of-the-art framework for columnar compression, and the fastest FOSS...
motis-project/motis
multimodal routing, geocoding, and map tiles
neka-nat/cad3dify
2D to 3D CAD Conversion Using VLM
batmanlab/Mammo-CLIP
[MICCAI 2024, top 11%] Official Pytorch implementation of Mammo-CLIP: A Vision Language...