Hermes 4 builds on our legacy of user-aligned models with expanded test-time compute capabilities.
Special attention was given to making the models creative and engaging to interact with, unencumbered by censorship, and neutrally aligned, while maintaining state-of-the-art math, coding, and reasoning performance among open-weight models.
You can try Hermes 4 in the new, revamped Nous Chat UI.
Nous Chat has been reworked to include parallel interactions, a completions mode, and a memory system that is being rolled out gradually. We now provide a suite of open and closed models for this experience, from Hermes 4 to GPT-5.
For the first week, all Hermes 4 inference in Nous Chat is free of charge.
Alongside these models, Nous Research is releasing a technical report that details the entirety of their creation process.
The technical report includes a thorough set of evaluations of Hermes 4 and other popular LLMs, complete with the actual text results of each test. We believe this report sets a new standard for transparency in benchmarking.
In pursuit of producing models that are open, steerable, and capable of the full range of human expression, we created a new benchmark, RefusalBench, that tests a model’s willingness to be helpful in a variety of scenarios commonly disallowed by both closed and open models.
Hermes 4 achieves state-of-the-art performance among popular closed and open models at conforming to your values, without censorship.
Special thanks to our launch day partners - @chutes_ai, @nebiusai, and @luminal_ai - for serving these models and powering our new chat experience.
Check out their platforms for additional options for API inference.
And if you’re looking for support or a great AI community to join, check out our Discord at discord.gg/NousResearch
These new models are hybrid reasoners: you can toggle long chain-of-thought reasoning on or off, choosing between a short, intuitive answer and a longer, well-reasoned, higher-accuracy one. They are now available on our API and for download on HuggingFace.
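As a rough illustration of how a toggleable reasoning mode can be driven from the client side, the sketch below enables reasoning via a system prompt and strips the model's thinking block for display. The prompt wording and the `<think></think>` tag convention are assumptions for illustration only; consult the model card for the exact toggle prompt.

```python
import re

# ASSUMPTION: a system prompt like this enables reasoning mode, and the
# model wraps its chain of thought in <think></think> tags. The exact
# wording is illustrative, not the official toggle prompt.
REASONING_SYSTEM_PROMPT = (
    "You are a deep thinking AI. Enclose your internal reasoning in "
    "<think></think> tags before giving your final answer."
)

def build_messages(user_prompt: str, reasoning: bool) -> list[dict]:
    """Assemble a chat request; reasoning=False omits the toggle prompt."""
    messages = []
    if reasoning:
        messages.append({"role": "system", "content": REASONING_SYSTEM_PROMPT})
    messages.append({"role": "user", "content": user_prompt})
    return messages

def strip_think(text: str) -> str:
    """Remove the <think>...</think> block, keeping only the final answer."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```

With reasoning off, the request is an ordinary chat completion; with it on, the full chain of thought comes back in the response and can be shown or hidden client-side.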
DeepHermes 24B Preview performs extremely well on reasoning tasks with reasoning mode ON, jumping more than 4x in accuracy on hard math problems and 43% on GPQA, a STEM question-answering benchmark.
Built on @MistralAI's excellent Mistral-Small-24B open model, it's a perfect size for quantization on consumer GPUs.
With reasoning mode off, it performs comparably to Mistral's own instruct variant.
The DeepHermes models scale well with size, improving progressively and rapidly from 3B to 24B. And it's not just great at objective tasks: it's also great for any question that demands deep thought, and it is completely transparent about its thinking process.
Introducing DeepHermes-3 Preview, a new LLM that unifies reasoning and intuitive language model capabilities.
DeepHermes 3 is built from the Hermes 3 data mix with new reasoning data, creating a model that can toggle long chains of thought on and off for improved accuracy at the cost of more test-time compute! huggingface.co/NousResearch/D…
This is our first work on reasoning models, and we hope our unique approach to user-controlled, toggleable reasoning furthers our mission of giving DeepHermes users more steerability for whatever needs they have.
These early benchmarks show a dramatic improvement in mathematical reasoning when reasoning mode is enabled, as well as a modest improvement on GPQA (Google-Proof Question Answering).
Here are some example outputs in reasoning mode, where it thinks longer for harder problems and shows the full, raw chain of thought to arrive at the answer, allowing insight, transparency, observability, and access.
Recent AI breakthroughs challenge the status-quo narrative that only closed mega-labs have the ability to push the frontier of superintelligence.
Today we announce Nous Psyche, built on @Solana: a cooperative training network for generative AI. Psyche coordinates heterogeneous hardware to join training runs and train open-source models.
We retell the myth of Psyche — a mortal’s quest for retribution against divine odds:
You can now experiment with Psyche’s DisTrO-enabled training code on our GitHub, and the larger open-sourced distributed training stack will be released alongside testnet.
Nous Research announces the pre-training of a 15B parameter language model over the internet, using Nous DisTrO and heterogeneous hardware contributed by our partners at @Oracle, @LambdaAPI, @NorthernDataGrp, @CrusoeCloud, and the Andromeda Cluster.
This run exhibits a loss curve and convergence rate that match or exceed those of centralized training.
Our paper and code on DeMo, the foundational research that led to Nous DisTrO, are now available (linked below).
We harness both Nous DisTrO, our novel networking stack that reduces inter-GPU communication by up to 10,000x during pretraining, and the testnet code for Psyche, a decentralized network that builds on Nous DisTrO to autonomously coordinate compute for model training and more.
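DisTrO's actual mechanism is DeMo's decoupled momentum with a DCT-based transform, described in the paper. As a simplified, generic illustration of how transmitting a compressed view of the gradient cuts inter-GPU traffic, here is plain top-k sparsification; this is not what DeMo uses, but it conveys the bandwidth-saving idea:

```python
import numpy as np

# Toy illustration of gradient compression for distributed training.
# NOTE: this is top-k sparsification, a standard baseline technique;
# DeMo itself uses a DCT-based momentum decomposition instead.

def topk_compress(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries; send (indices, values)
    instead of the full dense gradient."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_decompress(idx: np.ndarray, values: np.ndarray, shape) -> np.ndarray:
    """Rebuild a dense (mostly zero) gradient on the receiving GPU."""
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)
```

For a gradient of n parameters, each step communicates only 2k numbers (indices plus values) rather than n, which is where orders-of-magnitude reductions in inter-GPU traffic can come from.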
Psyche details coming soon.
DeMo was created in March 2024 by Bowen Peng (@bloc97_) and Jeffrey Quesnelle (@theemozilla) and has been published on arXiv in collaboration with Diederik P. Kingma (@dpkingma), a co-founder of OpenAI and co-inventor of the Adam optimizer and VAEs.
Today we are launching the Forge Reasoning API Beta, an advancement in inference time scaling that can be applied to any model or a set of models, for a select group of people in our community.
The Forge Reasoning engine is capable of dramatically improving Hermes 70B to reach parity in some categories with OpenAI's o1 (full), at the cost of more inference compute.
The API is built upon three architectures developed at Nous:
1. Monte Carlo Tree Search (MCTS)
2. Chain of Code (CoC)
3. Mixture of Agents (MoA)
Together, these three techniques create a powerful reasoning system that elicits complex, flexible, and nuanced responses from LLMs. Elevating open-source AI to the level of frontier models has been a core principle of Nous since its inception.
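The Mixture of Agents component can be sketched as follows. The proposer/aggregator split is the published MoA pattern, but the model callables and prompt wording below are illustrative assumptions, not Forge's actual implementation:

```python
from typing import Callable

# Hedged sketch of Mixture of Agents (MoA): several "proposer" models
# answer independently, then an "aggregator" model synthesizes their
# drafts into one response. The callables are stand-ins for real LLM
# API calls; Forge's internal pipeline is not public.
Model = Callable[[str], str]

def mixture_of_agents(prompt: str, proposers: list[Model],
                      aggregator: Model) -> str:
    # Collect one candidate answer from each proposer.
    drafts = [propose(prompt) for propose in proposers]
    numbered = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(drafts))
    # Ask the aggregator to merge the candidates into a final answer.
    synthesis_prompt = (
        f"Question: {prompt}\n"
        f"Candidate answers:\n{numbered}\n"
        "Synthesize the best single answer from these candidates."
    )
    return aggregator(synthesis_prompt)
```

The extra inference compute goes into the multiple proposer calls plus the aggregation pass, which matches the trade-off described above: better responses at the cost of more inference.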
We’re inviting a small group of beta users to try out the Forge Reasoning API over the next month. This inference technology requires battle-testing and user feedback to determine which areas it uniquely excels at in the real world.