Liquid AI
Build capable and efficient general-purpose AI systems at every scale.
Jan 6
Today, we release LFM2.5, our most capable family of tiny on-device foundation models.

It’s built to power reliable on-device agentic applications: higher quality, lower latency, and broader modality support in the ~1B parameter class.

> LFM2.5 builds on our LFM2 device-optimized hybrid architecture
> Pretraining scaled from 10T → 28T tokens
> Expanded reinforcement learning post-training
> Higher ceilings for instruction following

🧵

We release 5 open-weight model instances from a single architecture (a loading sketch follows the list):

> General-purpose instruct model
> Japanese-optimized chat model
> Vision-language model
> Native audio-language model (speech in/out)
> Base checkpoints for deep customization
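A minimal sketch of how one of these checkpoints could be loaded with Hugging Face transformers. The repo id below is a placeholder, not a confirmed release name, and it assumes the instruct variant ships a chat template.

```python
# Hedged sketch: load a hypothetical LFM2.5 instruct checkpoint with transformers.
# "LiquidAI/LFM2.5-1B-Instruct" is a placeholder repo id, not a confirmed name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LiquidAI/LFM2.5-1B-Instruct"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

messages = [{"role": "user", "content": "Summarize today's meeting notes in 3 bullets."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```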
Oct 7, 2025
Meet LFM2-8B-A1B, our first on-device Mixture-of-Experts (MoE)! 🐘

> LFM2-8B-A1B is the best on-device MoE in terms of both quality and speed.
> Performance of the 3B-4B model class, with up to 5x faster inference on CPUs and GPUs.
> Quantized variants fit comfortably on high-end phones, tablets, and laptops.

Enabling fast, private, low-latency applications across modern phones, tablets, laptops, and embedded systems.

1/n 🧵

LFM2-8B-A1B has greater knowledge capacity than competing models and is trained to deliver quality across a range of capabilities, including:

> Knowledge
> Instruction following
> Mathematics
> Language translation

2/n
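To make "8B total parameters, ~1B active per token" concrete, here is a generic top-k Mixture-of-Experts feed-forward block in PyTorch: the router selects a few experts per token, so per-token compute scales with active rather than total parameters. This is a textbook illustration with made-up sizes, not LFM2-8B-A1B's actual architecture.

```python
# Generic top-k MoE feed-forward block (illustrative only, not Liquid AI's design).
# Total parameters grow with num_experts, but each token only pays for top_k experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, num_experts=32, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_model)
        gate_probs = F.softmax(self.router(x), dim=-1)
        weights, idx = torch.topk(gate_probs, self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e          # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

moe = TopKMoE()
tokens = torch.randn(8, 1024)
print(moe(tokens).shape)  # torch.Size([8, 1024])
```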
Jan 20, 2025
Introducing LFM-7B, our new best-in-class language model in English, Arabic, and Japanese, optimized to be the substrate for private enterprise chat, code, fast instruction following, and agentic workflows. 1/

In a series of head-to-head chat capability evaluations, with 4 frontier LLMs acting as a jury, LFM-7B outperforms other models in this size class. 2/
Dec 2, 2024
New Liquid research: STAR -- Evolutionary Synthesis of Tailored Architectures.

At Liquid we design foundation models with two macro-objectives: maximize quality and efficiency. Balancing the two is challenging. To make progress towards this goal, we built a new algorithm — STAR.

Read more about it here:
liquid.ai/research/autom…

We first developed a new design theory for the computational units of modern AI systems. We then used it to devise an efficient encoding of architectures into genomes, and applied evolutionary algorithms to discover hundreds of new architecture designs.
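A minimal sketch of the evolutionary loop the thread describes, using a toy genome (a fixed-length list of operator choices) and a stand-in fitness function. STAR's real genome encoding and quality/efficiency objectives are not detailed in this thread, so everything below is a placeholder.

```python
# Toy evolutionary search over architecture "genomes" (illustrative only).
# A genome here is just a list of operator ids; fitness is a stand-in for
# STAR's real quality/efficiency evaluation, which this thread does not detail.
import random

OPERATORS = ["attention", "conv", "gated_linear", "recurrence"]  # placeholder unit vocabulary
GENOME_LEN = 8

def random_genome():
    return [random.choice(OPERATORS) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.2):
    return [random.choice(OPERATORS) if random.random() < rate else g for g in genome]

def fitness(genome):
    # Placeholder: reward operator diversity as a crude proxy for quality vs. efficiency.
    return len(set(genome)) + genome.count("gated_linear") * 0.1

def evolve(pop_size=32, generations=20):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]                      # keep the fittest quarter
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children                            # next generation
    return max(population, key=fitness)

print(evolve())
```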
Sep 30, 2024
Today we introduce Liquid Foundation Models (LFMs) to the world with the first series of our Language LFMs: a 1B, a 3B, and a 40B model. 1/n

LFM-1B performs well on public benchmarks in the 1B category, making it the new state-of-the-art model at this size. This is the first time a non-GPT architecture significantly outperforms transformer-based models.