This is one of the most interesting ideas on reasoning I've read in the past couple of months.
It uses a recurrent architecture for impressive hierarchical reasoning.
Here are my notes:
The paper proposes the Hierarchical Reasoning Model (HRM), a brain-inspired architecture that replaces CoT prompting with a recurrent model designed for deep, latent computation.
It moves away from token-level reasoning by using two coupled modules: a slow, high-level planner and a fast, low-level executor.
The two recurrent networks operate at different timescales to collaboratively solve tasks.
This yields greater reasoning depth and efficiency with only 27M parameters and no pretraining!
Despite its small size and minimal training data (~1k examples), HRM solves complex tasks like ARC, Sudoku-Extreme, and 30×30 maze navigation, where CoT-based LLMs fail.
HRM introduces hierarchical convergence, where the low-level module rapidly converges within each cycle, and the high-level module updates only after this local equilibrium is reached.
This enables nested computation and avoids premature convergence typical of standard RNNs.
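To make the two-timescale idea concrete, here is a minimal PyTorch sketch of the nested update loop. The module choices (GRU cells), sizes, and cycle counts are illustrative stand-ins, not the paper's exact components:

```python
import torch
import torch.nn as nn

class HRMSketch(nn.Module):
    """Two coupled recurrent modules at different timescales (illustrative)."""
    def __init__(self, dim=128, n_cycles=4, t_low=8):
        super().__init__()
        self.low = nn.GRUCell(dim * 2, dim)   # fast, low-level executor
        self.high = nn.GRUCell(dim, dim)      # slow, high-level planner
        self.n_cycles, self.t_low = n_cycles, t_low

    def forward(self, x):  # x: (batch, dim), an already-encoded input
        z_l = torch.zeros_like(x)
        z_h = torch.zeros_like(x)
        for _ in range(self.n_cycles):        # slow, high-level cycles
            for _ in range(self.t_low):       # low-level module runs toward a local equilibrium
                z_l = self.low(torch.cat([x, z_h], dim=-1), z_l)
            z_h = self.high(z_l, z_h)         # planner updates once per cycle
        return z_h
```

The key point is the nesting: the low-level state is refreshed many times per high-level update, so the planner only ever reads (approximately) converged execution results.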
A 1-step gradient approximation sidesteps memory-intensive backpropagation-through-time (BPTT).
This enables efficient training using only local gradient updates, grounded in deep equilibrium models.
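A hedged sketch of what a 1-step gradient can look like in practice, in the spirit of deep equilibrium models: run the inner iterations without tracking gradients, then backprop through only the final update (function names are hypothetical):

```python
import torch

def one_step_grad(f, x, z, n_iters=16):
    """Iterate toward a fixed point without storing activations, then take one
    differentiable step; memory stays O(1) in n_iters instead of O(n_iters)."""
    with torch.no_grad():
        for _ in range(n_iters - 1):
            z = f(x, z)          # no computation graph is built here
    return f(x, z)               # gradients flow through this last step only
```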
HRM implements adaptive computation time using a Q-learning-based halting mechanism, dynamically allocating compute based on task complexity.
This allows the model to “think fast or slow” and scale at inference time without retraining.
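The halting decision can be read off the high-level state with a small Q-head. A minimal sketch, assuming two Q-values (halt, continue) and a hard cap on reasoning segments; the Q-learning training signal is omitted:

```python
import torch
import torch.nn as nn

class HaltingHead(nn.Module):
    """Illustrative ACT-style halting head, not the paper's exact mechanism."""
    def __init__(self, dim=128):
        super().__init__()
        self.q = nn.Linear(dim, 2)  # [Q_halt, Q_continue]

    def should_halt(self, z_h, step, max_steps=16):
        if step + 1 >= max_steps:   # hard cap on thinking segments
            return torch.ones(z_h.shape[0], dtype=torch.bool, device=z_h.device)
        q_halt, q_cont = self.q(z_h).unbind(-1)
        return q_halt > q_cont      # greedy halting at inference
```

Raising `max_steps` at inference is what lets the model "think slower" on hard instances without retraining.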
Experiments on ARC-AGI, Sudoku-Extreme, and Maze-Hard show that HRM significantly outperforms larger models using CoT or direct prediction, even solving problems that other models fail entirely (e.g., 74.5% on Maze-Hard vs. 0% for others).
Analysis reveals that HRM learns a dimensionality hierarchy similar to the cortex: the high-level module operates in a higher-dimensional space than the low-level one (PR: 89.95 vs. 30.22).
The authors suggest that this is an emergent trait not present in untrained models.
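For reference, the participation ratio (PR) reported above is a standard effective-dimensionality measure: PR = (Σᵢ λᵢ)² / Σᵢ λᵢ², where λᵢ are the eigenvalues of the covariance of hidden states. A small sketch for computing it, assuming states are stacked as a (samples, dim) matrix:

```python
import torch

def participation_ratio(states):
    """PR = (sum of eigenvalues)^2 / sum of squared eigenvalues of the
    covariance of hidden states; higher PR ~ higher effective dimensionality."""
    states = states - states.mean(dim=0, keepdim=True)
    cov = states.T @ states / (states.shape[0] - 1)
    eig = torch.linalg.eigvalsh(cov)   # covariance is symmetric
    return eig.sum() ** 2 / (eig ** 2).sum()
```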
If you want to learn about the Agentic Web, look no further.
This new report is a banger!
It presents a detailed framework to understand and build the agentic web.
Here is everything you need to know:
Agentic Web
This paper introduces the concept of the Agentic Web, a transformative vision of the internet where autonomous AI agents, powered by LLMs, act on behalf of users to plan, coordinate, and execute tasks.
It proposes a structured framework for understanding this shift, situating it as a successor to the PC and Mobile Web eras.
It's defined by a triplet of core dimensions (intelligence, interaction, and economics) and involves fundamental architectural and commercial transitions.
Graph-R1 introduces a novel RAG framework that moves beyond traditional one-shot or chunk-based retrieval by integrating graph-structured knowledge, agentic multi-turn interaction, and RL.
Graph-R1 is an agent that reasons over a knowledge hypergraph environment by iteratively issuing queries and retrieving subgraphs using a multi-step “think-retrieve-rethink-generate” loop.
Unlike prior GraphRAG systems that perform fixed retrieval, Graph-R1 dynamically explores the graph based on evolving agent state.
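A hedged sketch of what such an agentic retrieval loop could look like; `llm` and `retrieve_subgraph` are caller-supplied stand-ins, not Graph-R1's actual API:

```python
def graph_r1_loop(question, llm, retrieve_subgraph, hypergraph, max_turns=5):
    """Illustrative think-retrieve-rethink-generate loop over a hypergraph."""
    evidence = []
    for _ in range(max_turns):
        thought = llm(f"Question: {question}\nEvidence: {evidence}\n"
                      "Think, then output either ANSWER: ... or QUERY: ...")
        if thought.startswith("ANSWER"):
            return thought                              # model decides to stop
        subgraph = retrieve_subgraph(hypergraph, thought)  # state-dependent retrieval
        evidence.append(subgraph)                       # rethink with new evidence
    return llm(f"Question: {question}\nEvidence: {evidence}\nGive the final answer.")
```

The retrieval query at each turn depends on everything gathered so far, which is the contrast with fixed, one-shot GraphRAG pipelines.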
> MoE Architecture
> Hybrid reasoning models
> 355B total (32B active)
> GQA + partial RoPE
> Multi-Token Prediction
> Muon Optimizer + QK-Norm
> 22T-token training corpus
> Slime RL Infrastructure
> Native tool use
Here's all you need to know:
Model Architecture & Pre-Training
GLM-4.5 has 355B total parameters (32B active); it's a deeper, narrower model, optimized for reasoning via more layers and 96 attention heads.
GLM-4.5-Air is 106B (12B active).
22T-token training corpus combining 15T tokens of general data with 7T tokens of code/reasoning-focused data.
Grouped-Query Attention + partial RoPE to enhance long-context efficiency and accuracy in reasoning tasks.
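"Partial" RoPE means rotating only a fraction of each head's dimensions and passing the rest through position-free. A minimal sketch; the fraction and layout here are illustrative, not GLM-4.5's exact configuration:

```python
import torch

def apply_partial_rope(q, rope_frac=0.5, base=10000.0):
    """Rotate the first rope_frac of each head dim; q: (batch, heads, seq, head_dim)."""
    d = q.shape[-1]
    r = int(d * rope_frac) // 2 * 2                  # rotated dims, kept even
    q_rot, q_pass = q[..., :r], q[..., r:]
    pos = torch.arange(q.shape[-2], dtype=q.dtype, device=q.device)
    inv_freq = base ** (-torch.arange(0, r, 2, dtype=q.dtype, device=q.device) / r)
    ang = pos[:, None] * inv_freq[None, :]           # (seq, r/2)
    cos, sin = ang.cos(), ang.sin()
    q1, q2 = q_rot[..., 0::2], q_rot[..., 1::2]
    rotated = torch.stack([q1 * cos - q2 * sin,
                           q1 * sin + q2 * cos], dim=-1).flatten(-2)
    return torch.cat([rotated, q_pass], dim=-1)      # positional + position-free dims
```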
Mid-training looks like a key part of this model's recipe:
"Unlike the earlier pre-training stage on large-scale universal documents, these stages leverage medium-sized domain-specific datasets, including instruction data."
Great title for a report, but even better insights about how increasing input tokens impact the performance of top LLMs.
Banger report from Chroma.
Here are my takeaways (relevant for AI devs):
Context Rot
The research evaluates how state-of-the-art LLMs perform as input context length increases, challenging the common assumption that longer contexts are uniformly handled.
Testing 18 top models (including GPT-4.1, Claude 4, Gemini 2.5, Qwen3), the authors show that model reliability degrades non-uniformly even on simple tasks as input grows, a phenomenon they term "context rot."
Simple tasks reveal degradation
Even basic benchmarks like semantic variants of Needle-in-a-Haystack, repeated word copying, or long QA logs (LongMemEval) expose accuracy drops as context length increases.
The decline is more dramatic for semantically ambiguous inputs or outputs that scale with length.
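A sketch of how one might reproduce the basic measurement: fix a needle/question pair, grow the surrounding filler, and track whether the answer survives. `ask_model` is a stand-in for any LLM call, not Chroma's harness, and the substring check is a crude stand-in for a real grader:

```python
def context_rot_curve(ask_model, needle, question, answer,
                      lengths=(1_000, 10_000, 100_000)):
    """Place the same needle mid-context at growing lengths and check recovery."""
    results = {}
    for n in lengths:
        filler = "The quick brown fox jumps over the lazy dog. " * (n // 9)
        half = len(filler) // 2
        prompt = f"{filler[:half]}\n{needle}\n{filler[half:]}\n\nQ: {question}\nA:"
        results[n] = answer.lower() in ask_model(prompt).lower()
    return results
```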
160+ pages covering the most important research around context engineering for LLMs.
This is a must-read!
Here are my notes:
The paper provides a taxonomy of context engineering in LLMs categorized into foundational components, system implementations, evaluation methodologies, and future directions.
The context engineering timeline from 2020 to 2025 runs from foundational RAG systems to complex multi-agent architectures.