Mac owners, don't miss this: MLX LM is now integrated directly with Hugging Face 🤯
⬇️ Run 4,400+ LLMs locally on Apple Silicon at max speed, no cloud, no wait.
You just need to toggle MLX LM on in your Local Apps settings: huggingface.co/settings/local…
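The thread itself doesn't show code, but here's a minimal sketch of what running one of these models looks like with the mlx-lm Python package (pip install mlx-lm) on Apple Silicon; the model repo below is just an example, any MLX-converted checkpoint on the Hub should work:

from mlx_lm import load, generate

# Download an MLX-converted model from the Hugging Face Hub and load it on-device
model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")

# Build a chat-formatted prompt and generate entirely locally, no cloud round trip
messages = [{"role": "user", "content": "Explain MLX in one tweet."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=False))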
Mar 7 • 8 tweets • 3 min read
QwQ-32B changed local AI coding forever 🤯
We now have SOTA performance at home. Sharing my stack + tips ⬇️
All the tools we're going to use are free, open source, and cross-platform (Windows/Mac/Linux): LM Studio, Aider, and Hugging Face. A sketch of how the pieces talk to each other is below.
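The glue in this stack is LM Studio's local server, which exposes an OpenAI-compatible API that Aider (or any other client) can point at. A rough sketch of talking to it from Python, assuming LM Studio's default port and with the model id depending on whatever checkpoint you've loaded:

from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; defaults shown, adjust to your setup
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is ignored locally

resp = client.chat.completions.create(
    model="qwq-32b",  # use the identifier your loaded model shows in LM Studio
    messages=[{"role": "user", "content": "Write a Python function that parses a CSV header."}],
)
print(resp.choices[0].message.content)

Aider can be configured to hit the same local endpoint, which is how you get SOTA-level coding assistance without sending anything to the cloud.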
Feb 17 • 5 tweets • 2 min read
Forget GPT wrappers: 2025 is the year of hyper-specialized small reasoning models.
Someone made a UI reasoning model: an SFT fine-tune of Qwen-7B trained on just 500 samples. For landing-page generation, the results are already on par with most closed models (and it's a 7B...)
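The thread doesn't share the training script, but a 500-sample SFT run like this is roughly what it looks like with TRL's SFTTrainer; the data file, hyperparameters, and exact Qwen checkpoint here are assumptions for illustration:

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# ~500 prompt/landing-page pairs in chat format ("messages" column); file name is hypothetical
dataset = load_dataset("json", data_files="ui_sft_samples.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",  # assumed base checkpoint; the thread only says "Qwen-7B"
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="qwen-7b-ui-reasoner",
        num_train_epochs=3,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=1e-5,
    ),
)
trainer.train()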