The AI lab behind GLM models, dedicated to inspiring the development of AGI to benefit humanity.
https://t.co/b6zGxJvzzS
Dec 8 • 6 tweets • 3 min read
GLM-4.6V Series is here🚀
- GLM-4.6V (106B): flagship vision-language model with 128K context
- GLM-4.6V-Flash (9B): ultra-fast, lightweight version for local and low-latency workloads
First-ever native Function Calling in the GLM vision model family
API Pricing (per 1M tokens):
- GLM-4.6V: $0.6 input / $0.9 output
- GLM-4.6V-Flash: Free
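At these rates, cost scales linearly with token counts. A minimal sketch of the arithmetic, with the GLM-4.6V rates hardcoded from the pricing list above:

```python
# Estimate GLM-4.6V API cost from token counts.
# Rates per 1M tokens, taken from the pricing list above.
INPUT_RATE = 0.6   # USD per 1M input tokens
OUTPUT_RATE = 0.9  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one GLM-4.6V request."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. a 50K-token multimodal prompt with a 2K-token reply:
print(f"${estimate_cost(50_000, 2_000):.4f}")  # → $0.0318
```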
GLM-4.6V accepts multimodal inputs of many types and automatically generates high-quality, structured, image-text interleaved content.
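Native function calling plus image input means a single request can carry both a picture and a tool definition. The sketch below builds such a request payload in the common OpenAI-compatible chat format; the model id, field names, and the `lookup_product` tool are illustrative assumptions, not the documented Z.ai schema — consult the official API docs before use.

```python
import json

# Hypothetical tool definition for illustration — not from the GLM docs.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_product",
        "description": "Look up a product identified in the image.",
        "parameters": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
}]

# Request payload mixing an image part and a text part in one user message,
# following the widely used OpenAI-compatible content-array convention.
payload = {
    "model": "glm-4.6v",  # assumed model id — check the API reference
    "messages": [{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
            {"type": "text",
             "text": "Identify the product in this photo and look it up."},
        ],
    }],
    "tools": tools,
}

print(json.dumps(payload, indent=2))
```

POSTing this JSON to the chat-completions endpoint (with an API key) would let the model return either text or a `tool_calls` entry naming `lookup_product`.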
Aug 11 • 5 tweets • 3 min read
Introducing GLM-4.5V: a breakthrough in open-source visual reasoning
GLM-4.5V delivers state-of-the-art results among open-source models in its size class across 41 benchmarks.
Built on the GLM-4.5-Air base model, GLM-4.5V inherits proven techniques from GLM-4.1V-Thinking while achieving effective scaling through a powerful 106B-parameter MoE architecture.