Prime Intellect · May 12
Releasing INTELLECT-2: We’re open-sourcing the first 32B parameter model trained via globally distributed reinforcement learning:

• Detailed Technical Report
• INTELLECT-2 model checkpoint

primeintellect.ai/blog/intellect…
To train a model with reinforcement learning in a fully decentralized setting using community-contributed GPUs, we open-source several novel infrastructure components.
PRIME-RL: A fully asynchronous reinforcement learning framework designed for decentralized training. Decoupling of rollout generation, model training, and weight broadcasting enables training across heterogeneous, unreliable networks.
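The decoupling can be pictured as independent loops talking through queues: inference workers keep generating with slightly stale weights, the trainer consumes rollouts as they arrive, and weight hand-off never blocks either side. A minimal sketch of that idea, with hypothetical names and queue-based wiring rather than the real PRIME-RL API:

```python
# Illustrative sketch of the decoupling PRIME-RL describes (rollout generation,
# training, and weight broadcast as independent loops). All names here are
# hypothetical; this is not the actual PRIME-RL API.
import queue
import threading

rollouts = queue.Queue()        # inference workers -> trainer
new_weights = queue.Queue()     # trainer -> broadcast layer (e.g. SHARDCAST)

def inference_worker(n_rollouts):
    version = 0
    for i in range(n_rollouts):
        while not new_weights.empty():   # adopt fresher weights if available,
            version = new_weights.get()  # but never block on the trainer
        rollouts.put((version, f"rollout-{i}"))

def trainer(n_steps, broadcast_every=4):
    for step in range(1, n_steps + 1):
        version, data = rollouts.get()   # rollouts may be a step or two stale
        # ... a GRPO update on `data` would happen here ...
        if step % broadcast_every == 0:
            new_weights.put(step)        # broadcast overlaps with ongoing inference

threading.Thread(target=inference_worker, args=(32,)).start()
trainer(n_steps=32)
```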
SHARDCAST: A library for distributing large files via an HTTP-based tree-topology network that efficiently propagates updated model weights from training nodes to the decentralized inference workers.
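The tree topology is what keeps origin bandwidth constant as workers scale: each relay fetches shards from its parent over HTTP and serves its own children. A toy illustration of the parent/child arithmetic only, not the actual SHARDCAST code:

```python
# Hypothetical illustration of HTTP tree-topology fan-out: each relay downloads
# shards from its parent and serves them to its children, so the origin's
# bandwidth is amortized. Not the actual SHARDCAST implementation.

def parent_url(node_id: int, fanout: int, hosts: list[str]) -> str:
    """In a k-ary tree, node i's parent is (i - 1) // k; node 0 is the origin.
    The {idx} placeholder is left as a URL template for shard indices."""
    parent = (node_id - 1) // fanout
    return f"http://{hosts[parent]}/checkpoint/shard-{{idx}}"

# 13 workers at fanout 3 form a tree of depth 2 (1 + 3 + 9 nodes), so new
# weights reach every worker in at most 2 relay hops instead of 12 direct
# downloads from the origin.
hosts = [f"10.0.0.{i}" for i in range(13)]
for i in range(1, 13):
    print(i, "fetches from", parent_url(i, fanout=3, hosts=hosts))
```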
TOPLOC Validators: A validator service that uses TOPLOC proofs to verify rollouts from untrusted inference workers before they are used for model training.
INTELLECT-2 is trained using rule-based rewards across math and coding problems and length rewards guiding the model to follow its thinking budget. We introduce modifications to the standard GRPO recipe to enhance training stability and encourage faster learning.
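The rule-based and length rewards compose into a single scalar per rollout. A hedged sketch of one plausible shaping, with an illustrative penalty form and coefficient (the report's exact recipe may differ):

```python
# Hypothetical sketch of the reward shaping described above: a binary rule-based
# reward (the answer verifies or not) plus a length term that penalizes deviation
# from the prompted thinking budget. The coefficient and functional form are
# illustrative assumptions, not the report's recipe.

def reward(answer_correct: bool, n_thinking_tokens: int,
           budget: int, alpha: float = 0.001) -> float:
    task_reward = 1.0 if answer_correct else 0.0
    # Penalize deviating from the target length so the model learns to
    # respect the budget stated in its prompt.
    length_penalty = alpha * abs(n_thinking_tokens - budget)
    return task_reward - length_penalty

print(reward(True, n_thinking_tokens=2_000, budget=2_048))  # near budget -> ~1.0
print(reward(True, n_thinking_tokens=8_000, budget=2_048))  # far over budget -> penalized
```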

Two-step asynchronous RL: The broadcast of new policy weights is fully overlapped with ongoing inference and training, eliminating communication bottlenecks.
Two-Sided GRPO Clipping: Stabilizes training by mitigating gradient spikes with two-sided token probability ratio clipping.
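For intuition: with the standard one-sided clip, a token whose probability ratio explodes while its advantage is negative passes through the min() unclipped, which is exactly the gradient-spike case. A hedged sketch of a two-sided variant (epsilon and delta values are illustrative, not the report's hyperparameters):

```python
# Hedged sketch of two-sided clipping in a GRPO/PPO-style token loss.
import torch

def two_sided_grpo_loss(logp, logp_old, adv, eps=0.2, delta=4.0):
    ratio = torch.exp(logp - logp_old)               # pi_theta / pi_old per token
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps)
    loss = -torch.min(ratio * adv, clipped * adv)    # standard clipped term
    # Second side: for adv < 0 the loss above grows linearly in the ratio, so
    # cap it at delta * |adv| to bound the gradient on those tokens too.
    loss = torch.where(adv < 0, torch.minimum(loss, -delta * adv), loss)
    return loss.mean()
```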
Advanced Data Filtering: Combines offline and online filtering to select challenging tasks, significantly enhancing model learning efficiency.
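A minimal sketch of what the two filtering stages could look like; the thresholds and field names are assumptions, not the report's pipeline:

```python
# Illustrative sketch of offline plus online data filtering for GRPO training.

def offline_filter(problems, lo=0.1, hi=0.9):
    """Before training, keep problems the base policy solves sometimes but not
    always (solve rates estimated by sampling), i.e. the challenging band."""
    return [p for p in problems if lo <= p["base_solve_rate"] <= hi]

def online_filter(prompt_groups):
    """During training, drop groups where every rollout got the same reward:
    with group-normalized (GRPO) advantages those contribute no learning signal."""
    return [g for g in prompt_groups if len(set(g["rewards"])) > 1]
```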
Experiments:
We report results from two main experiments: TARGET-SHORT, an experimental run with short target lengths to train an efficient reasoning model, and TARGET-LONG, our main run with longer target lengths.

Reward Trajectories: [figure]
Benchmark Performance:
We were able to improve QwQ-32B's performance on math and coding benchmarks. Since QwQ-32B is already very strong and heavily RL-trained, large further gains will likely require better base models or higher-quality data.
INTELLECT-2 demonstrates that globally decentralized RL works.

Now, we’re focusing on tool-assisted reasoning, crowdsourcing higher-quality data, and optimizing our infrastructure and training recipe to build frontier open models.

Join us to build open-source and decentralized AGI.
Links
• Detailed Technical Report: primeintellect.ai/intellect-2
• INTELLECT-2 on Hugging Face: huggingface.co/collections/Pr…
• Chat Interface to try it out: chat.primeintellect.ai
• Blog: primeintellect.ai/blog/intellect…

More from @PrimeIntellect

Apr 15
Today we’re launching INTELLECT-2:

The first decentralized 32B-parameter RL training run open to join for anyone with compute — fully permissionless.

Scaling towards frontier reasoning across coding, math and science.
INTELLECT-2 brings decentralized training into the inference-time compute era:
• Fully async, decentralized reinforcement learning
• Eliminating communication overhead
• Scalable across heterogeneous GPUs worldwide

primeintellect.ai/blog/intellect…
Over the past months, we’ve built the full open-source stack to enable INTELLECT-2:
• PRIME-RL: fully async decentralized RL
• GENESYS & SYNTHETIC-1: crowdsourced tasks & verifiers for RL
• TOPLOC validation: verifiable inference with low overhead
• Protocol Testnet: global AI coordination infrastructure
Feb 6
Introducing SYNTHETIC-1: Collaboratively generating the largest synthetic dataset of verified reasoning traces for math, coding and science using DeepSeek-R1.

Join us to contribute compute towards state-of-the-art open reasoning models.
Today, we release:
- SYNTHETIC-1: 1.4 million high-quality tasks & verifiers
- Public synthetic data run - allowing anyone to contribute compute
- GENESYS: open, extendable synthetic data generation framework + call for crowdsourcing tasks & verifiers

primeintellect.ai/blog/synthetic…
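
GENESYS pairs each task with a programmatic verifier so generated reasoning traces can be kept or discarded automatically. A hedged sketch of what such a verifier might look like for a math task (the interface is an assumption, not the actual GENESYS API):

```python
# Hypothetical shape of a task verifier in the GENESYS sense: a pure function
# that checks a generated solution against ground truth.
import re

def verify_math(response: str, gold_answer: str) -> bool:
    """Accept the trace only if the final boxed answer matches the reference."""
    match = re.search(r"\\boxed\{([^}]*)\}", response)
    return match is not None and match.group(1).strip() == gold_answer.strip()

assert verify_math(r"... so the result is \boxed{42}", "42")
```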
Our open reproduction & scaling of R1 will proceed in two steps, mirroring the DeepSeek-R1 approach:
1. Generate verified reasoning data & train SFT model on this cold-start data
2. Globally distributed reinforcement learning with verifiable rewards
Jan 28
Today, we release TOPLOC: A Locality Sensitive Hashing Scheme for Verifiable Inference

- Detects modifications to models, prompts, or precision
- Robust across GPU types, tensor parallel configurations and attention kernels
- Up to 100× faster validation than generation
- Reduces memory overhead of proofs by 1000×

primeintellect.ai/blog/toploc

Building the foundation for decentralized, verifiable compute protocols.
The Problem: Trust in LLM Inference

In a peer-to-peer setting, ensuring honest behavior among providers requires detecting and penalizing dishonest ones. Providers often make changes, such as:

- Lowering precision
- Compressing the KV cache
- Altering model weights or prompts
TOPLOC encodes key features of the last hidden states into a compact, verifiable proof.

- Providers commit the top-k values of the last hidden states
- Verifiers use prefill to process commits, enabling much faster validation than the original generation
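
Put together, a commit/verify round might look like the following sketch; the tolerance and exact encoding are assumptions here, and the TOPLOC paper specifies the real scheme:

```python
# Hedged sketch of the commit/verify flow described above: the provider commits
# the top-k entries of the final hidden states, and the verifier recomputes
# them with a single prefill pass over the same tokens.
import torch

def commit(last_hidden: torch.Tensor, k: int = 128):
    """Provider side: keep only the k largest activations and their positions."""
    values, indices = torch.topk(last_hidden.flatten(), k)
    return indices, values

def verify(recomputed_hidden: torch.Tensor, indices, values, tol=1e-2) -> bool:
    """Verifier side: one prefill pass reproduces the hidden states; the proof
    passes if the committed top-k entries match within a tolerance that absorbs
    cross-GPU / kernel nondeterminism."""
    recomputed = recomputed_hidden.flatten()[indices]
    return bool(torch.allclose(recomputed, values, atol=tol))
```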
Jan 6
Releasing METAGENE-1: In collaboration with researchers from USC, we're open-sourcing a state-of-the-art 7B parameter Metagenomic Foundation Model.

Enabling planetary-scale pathogen detection and reducing the risk of pandemics in the age of exponential biology.
METAGENE-1 is a 7B parameter autoregressive transformer model trained on over 1.5T DNA and RNA base pairs sequenced from wastewater samples.

Website: metagene.ai
Paper: metagene.ai/metagene-1-pap…
Github: github.com/metagene-ai/me…
Hugging Face: huggingface.co/metagene-ai
The dataset is sourced from a large collection of human wastewater samples, processed and sequenced using deep metagenomic (next-generation) sequencing methods.

After pretraining, the model is designed to aid tasks in biosurveillance, pandemic monitoring, and pathogen detection.
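
For orientation, using the model should look like any Hugging Face causal LM; the repo id below is an assumption, so check huggingface.co/metagene-ai for the published name:

```python
# Minimal usage sketch, assuming a standard Hugging Face causal-LM checkpoint
# under the metagene-ai org (the exact repo id is an assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("metagene-ai/METAGENE-1")
model = AutoModelForCausalLM.from_pretrained("metagene-ai/METAGENE-1")

# Score a short nucleotide sequence; downstream biosurveillance tasks typically
# build on such likelihoods or on the model's embeddings.
inputs = tokenizer("ACGTACGTTAGC", return_tensors="pt")
loss = model(**inputs, labels=inputs["input_ids"]).loss
print("per-token NLL:", loss.item())
```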
Nov 29, 2024
Releasing INTELLECT-1: We're open-sourcing the first decentrally trained 10B model:

- INTELLECT-1 base model & intermediate checkpoints
- Pre-training dataset
- Post-trained instruct models by @arcee_ai
- PRIME training framework
- Technical paper with all details
This represents a 10× scale-up from our previous work and demonstrates that large-scale model training is no longer confined to large corporations but can be achieved through distributed, community-driven approaches.

Technical report: github.com/PrimeIntellect…

Blogpost: primeintellect.ai/blog/intellect…
Try out INTELLECT-1 on chat.primeintellect.ai
Apr 23, 2024
Introducing Prime Intellect – democratizing AI development at scale, from compute to intelligence.

We're excited to announce our $5.5M raise from @DistributedG @coinfund_io @CompoundVC @Collab_Currency @protocollabs @ClementDelangue @dylan522p and others

primeintellect.ai/blog/introduci…
Our vision
Build infrastructure to aggregate compute, develop distributed training frameworks, and create a protocol for decentralized AI development—enabling anyone to contribute resources, collectively train open models, and share in their ownership.
Our masterplan
1. Aggregate global compute (live in beta)
2. Enable globally distributed training across clusters
3. Collaboratively train open AI models in high-impact domains
4. Create a decentralized protocol for collective AI model ownership
