making ascii cline follow your mouse was a lot of fun but turned out to be way more complicated than I thought since 1) models are notoriously bad at ascii art, and 2) terminals were built for text and don't have native mouse support.
here's how i did it! 🧵
1) I turned the cline logo into a video using Google Flow and their Veo 3.1 model.
I prompted it to make the logo look 3D and move its head from left to right. (lol at the random robot sounds it added)
2) then I used ascii-motion.app by @CameronFoxly to convert the video into 192 frames of ascii. Each frame is cline looking at a different position on screen, so mouse at the left edge = frame 0 (eyes looking left), and at the right edge = frame 191 (eyes looking right).
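that left-edge-to-right-edge mapping is just linear interpolation over the 192 frames. here's a minimal sketch of the idea (function name, clamping, and constants are mine, not the actual Cline source):

```python
# Map a mouse x position to one of 192 ascii frames.
# NUM_FRAMES and the clamping behavior are illustrative assumptions.
NUM_FRAMES = 192

def frame_for_mouse_x(x: int, term_width: int) -> int:
    """Left edge -> frame 0, right edge -> frame NUM_FRAMES - 1."""
    if term_width <= 1:
        return 0
    ratio = x / (term_width - 1)
    ratio = max(0.0, min(1.0, ratio))  # clamp in case x lands out of bounds
    return round(ratio * (NUM_FRAMES - 1))
```

so on an 80-column terminal, the mouse at column 0 picks frame 0 and column 79 picks frame 191.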
3) the terminal lets CLIs capture either ALL mouse events or none, so to track movement we also had to handle scrolling, clicking, etc.
when you move your mouse, the terminal sends escape sequences to stdin like \x1b[<35;46;17M where we parse x=46 and y=17.
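those sequences follow xterm's SGR mouse reporting format (ESC [ < button ; x ; y, ending in M or m). here's a hedged sketch of parsing them — the regex and field handling are mine, not the actual Cline implementation:

```python
import re

# SGR mouse reporting: \x1b[<button;x;yM (press/motion) or ...m (release).
# This parser is a sketch, not Cline's actual code.
SGR_MOUSE = re.compile(r"\x1b\[<(\d+);(\d+);(\d+)([Mm])")

def parse_mouse_event(data: bytes):
    m = SGR_MOUSE.search(data.decode("ascii", errors="ignore"))
    if not m:
        return None
    button, x, y = int(m.group(1)), int(m.group(2)), int(m.group(3))
    return {
        "button": button,
        "x": x,
        "y": y,
        "motion": bool(button & 32),   # bit 5 is set for motion events
        "release": m.group(4) == "m",
    }
```

to get these sequences at all, a CLI first enables mouse tracking by writing the xterm escapes `\x1b[?1003h` (report all motion) and `\x1b[?1006h` (SGR encoding) to the terminal, and turns them back off on exit.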
try it out for yourself in our new cli update!
npm i -g cline
cline is an open source coding agent used by over 5m developers, now just as fun and powerful in the terminal as it's been in the ide. github.com/cline/cline
Seriously blown away by Moonshot's new Kimi K2 model in @cline. It beats Claude Opus 4 on coding benchmarks and is up to 90% cheaper. It was clearly built to excel at plan -> act, iteratively improving code, and complex tool-use instructions, making it a perfect match for how Cline was built. (Almost feels like it was trained for Cline, since they have a page on their site about it: platform.moonshot.ai/docs/guide/age…)
Here are my takeaways from the K2 research papers 🧵
It's built on the same open source MoE architecture that powers DeepSeek-V3, but with 50% more parameters. MoE (Mixture of Experts) is a technique that breaks a large model into a gating network (the "manager") and smaller specialized networks (the "experts"). Your input goes to the manager, which routes it to the appropriate experts, giving quicker, more accurate results than using a single "generalist" for every problem. Kimi K2 trained its network of experts on code generation, agentic tool use, and math/sciences.
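the manager/expert routing above can be sketched in a few lines. this is a toy top-k router to show the idea — the sizes, scores, and scalar "experts" are illustrative stand-ins, not K2's actual architecture:

```python
import math
import random

random.seed(0)

# Toy MoE router: the gating network scores each expert, we keep the
# TOP_K highest-scoring ones, and mix their outputs by softmax weight.
# All sizes here are made up for illustration.
NUM_EXPERTS, TOP_K = 8, 2

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, gate_scores, experts):
    """Route `token` to the TOP_K highest-scoring experts and mix outputs."""
    top = sorted(range(NUM_EXPERTS), key=lambda i: gate_scores[i])[-TOP_K:]
    weights = softmax([gate_scores[i] for i in top])
    return sum(w * experts[i](token) for w, i in zip(weights, top))

# Each "expert" is just a scalar function standing in for a sub-network.
experts = [lambda x, k=k: (k + 1) * x for k in range(NUM_EXPERTS)]
gate_scores = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
output = moe_forward(2.0, gate_scores, experts)
```

the payoff is that only TOP_K of the NUM_EXPERTS networks run per token, which is how MoE models keep a huge parameter count while paying a much smaller compute cost per input.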
But Moonshot had 2 key insights that helped them go a step further. They cite Ilya Sutskever's idea that human data is a finite "fossil fuel", and that LLMs should instead learn from their own self-generated interactions, freeing them from the limits of human data so they can surpass human capabilities. So to enhance Kimi's agentic capabilities, they trained it on large amounts of synthetic data simulating real-world tool-use scenarios (including MCP tools!) and used an RL system with a self-judging mechanism, where the model acts as its own critic on coding tasks.