NVIDIA GPUs or modern CPU instructions like AVX or AVX2 if available.
No virtualization required.
Feb 2, 2024
Ollama vision is here.
Welcome to the era of open-source multimodal models.
In this new Ollama release (v0.1.23), we’ve made improvements to how Ollama handles multimodal models. We hope you’ll like it as much as we do!
It works across the CLI and the Python and JavaScript libraries.
In the CLI? Type your prompt, then drag and drop your image in.
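From code, a minimal sketch with the Python library might look like this. It assumes the ollama Python package is installed and a multimodal model such as llava has already been pulled; the model name and image path are placeholders, not part of the release notes.

import ollama

# Ask a multimodal model a question about a local image.
# 'llava' and './photo.png' are placeholders; swap in your own model and file.
response = ollama.chat(
    model='llava',
    messages=[{
        'role': 'user',
        'content': 'What is in this picture?',
        'images': ['./photo.png'],
    }],
)
print(response['message']['content'])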
Jan 27, 2024
Friday fun with models!
ollama run stablelm2
Stable LM 2 from @StabilityAI, a 1.6 billion parameter small language model trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch.
Thanks @EMostaque and team for the model! It now runs on Ollama.
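If you'd rather call it from code than the CLI, here is a rough sketch using the Python library, assuming the ollama package is installed and the model has been pulled with ollama pull stablelm2; the prompt is just an example.

import ollama

# Stream a short completion from Stable LM 2 (1.6B), printing tokens as they arrive.
for chunk in ollama.generate(model='stablelm2', prompt='Say hello in Dutch.', stream=True):
    print(chunk['response'], end='', flush=True)
print()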