Prime Intellect
Feb 11
Introducing Lab: A full-stack platform for training your own agentic models

Build, evaluate and train on your own environments at scale without managing the underlying infrastructure.

Giving everyone their own frontier AI lab.
We are not inspired by a future where a few labs control the intelligence layer.

So we built a platform to give everyone access to the tools of a frontier lab.

If you are an AI company, you can now be your own AI lab.

If you are an AI engineer, you can now be an AI researcher.
Lab unifies everything you need for post-training research into one platform:

+ Environments Hub
+ Hosted Evaluations
+ Hosted Training
+ Deployments & Inference

All without needing to worry about the costs of massive GPU clusters or the headaches of low-level algorithm details.
Lab is built around environments, which include:

+ A dataset of tasks
+ A harness for the model
+ A rubric to score performance

Use environments to train models with RL, evaluate capabilities, generate synthetic data, optimize prompts, experiment with agent harnesses, and more.
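For a concrete picture, here is a rough sketch of what a minimal environment definition can look like. It assumes the open-source verifiers library that Hub environments are built with; treat the exact class names, column names, and reward-function signature as assumptions and check the docs for the current API.

```python
# Rough sketch of a minimal environment: dataset + harness + rubric.
# Assumes the open-source `verifiers` library; exact class names and
# signatures may differ from the current release, so check the docs.
import verifiers as vf
from datasets import Dataset

# 1) A dataset of tasks: prompts plus reference answers.
dataset = Dataset.from_list([
    {"question": "What is 17 * 24?", "answer": "408"},
    {"question": "What is 9 ** 3?", "answer": "729"},
])

# 2) A rubric to score performance: weighted reward functions.
def exact_match(completion, answer, **kwargs) -> float:
    return 1.0 if answer in str(completion) else 0.0

rubric = vf.Rubric(funcs=[exact_match], weights=[1.0])

# 3) A harness for the model: here, a single-turn chat environment.
def load_environment(**kwargs):
    return vf.SingleTurnEnv(
        dataset=dataset,
        system_prompt="Answer with just the number.",
        rubric=rubric,
        **kwargs,
    )
```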
Just run `prime lab setup` and start your coding agent to set up your own AI lab.
Hosted Training

Create your environment, configure your training run, and we handle the rest.

No worrying about managing infrastructure, GPUs, or low-level algorithms.

We’re launching with agentic RL, and adding support for SFT and other algorithms in the near future.
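Purely as an illustration of the "configure your training run" step: a run boils down to picking an environment, an open base model, and a handful of RL knobs. The field names below are invented for illustration and are not Lab's actual config schema; see the docs link at the end of the thread for the real one.

```python
# Hypothetical illustration only: field names are invented to show the shape
# of a hosted RL run config, not Lab's actual schema (see the docs).
training_run = {
    "environment": "my-org/my-coding-env",  # an environment from the Hub
    "model": "Qwen/Qwen3-8B",               # open base model to fine-tune
    "algorithm": "rl",                      # agentic RL at launch; SFT later
    "rollouts_per_step": 256,               # parallel rollouts per RL step
    "max_steps": 500,
    "lora": {"rank": 32},                   # served via multi-tenant LoRA
}
```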
Hosted Evaluations

+ Run evals using our hosted inference, sandboxes, and more
+ Visualize results and raw outputs
+ Share results on the Environments Hub
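As a rough sketch of what an eval amounts to under the hood: load an environment, point it at an OpenAI-compatible endpoint, and let its rubric score the rollouts. The function and method names below follow the open-source verifiers library, but treat the exact signatures as assumptions.

```python
# Rough sketch: scoring a model against an environment's rubric.
# Assumes the open-source `verifiers` library and an OpenAI-compatible
# endpoint; exact function names and arguments may differ, check the docs.
from openai import OpenAI
import verifiers as vf

env = vf.load_environment("my-coding-env")  # assumed helper for an installed env
client = OpenAI()                           # or any OpenAI-compatible endpoint

results = env.evaluate(client=client, model="gpt-4.1-mini", num_examples=20)
print(results)                              # rollouts and rewards scored by the rubric
```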
Beyond our own INTELLECT-3 model, Lab lets you run reinforcement learning on a wide range of open models.

From Nvidia, Arcee, Hugging Face, Allen AI, Z AI, Qwen, and many more launching soon.

We’re also launching with experimental multimodality support.
Deployments & Inference

Large-scale production deployments of your fine-tuned models on shared hardware.

Built to evolve towards a future of continual learning, where models learn in production as training and inference collapse into a single loop.
Infrastructure

Lab is built on the same stack we used to train INTELLECT-3.

Each run gets a dedicated orchestrator, with multi-tenant LoRA for training and inference, enabling shared hardware across runs, high efficiency, and per-token pricing.

Over the past few weeks in private beta, more than 3,000 RL runs were completed by individuals and companies from around the world.

Starting today, we’re opening it up to everyone.
Get started with your first training run

docs.primeintellect.ai/hosted-trainin…


More from @PrimeIntellect

Jan 27
We're excited to introduce @arcee_ai's Trinity Large model.

An open 400B parameter Mixture of Experts model, delivering frontier-level performance with only 13B active parameters.

Trained in collaboration between Arcee, Datology and Prime Intellect.
Trinity Architecture

Key design choices:
- Interleaved local + global attention (3:1 pattern)
- Grouped-query + gated attention
- New load-balancing method (SMEBU)
- Depth-scaled sandwich norm and QK norm

With extreme sparsity, built for long context and fast inference.
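As a toy illustration of what the 3:1 interleave means (three sliding-window attention layers for every global-attention layer), not Trinity's actual implementation, layer count, or window size:

```python
# Toy illustration of a 3:1 local/global attention interleave.
# Not Trinity Large's actual code; layer count and window size are made up.
def attention_layout(num_layers: int = 64, local_window: int = 4096) -> list[dict]:
    layout = []
    for layer in range(num_layers):
        if (layer + 1) % 4 == 0:  # every 4th layer attends over the full context
            layout.append({"layer": layer, "attention": "global"})
        else:                     # the other three use a local sliding window
            layout.append({"layer": layer, "attention": "local", "window": local_window})
    return layout
```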
Infrastructure

- Large-scale synthetic data generation on ~2k H100s
- Training Trinity Large on 2k B300 GPUs

Training stack:
- Modified torchtitan
- Muon optimizer
- HSDP with FSDP group size 128
- Expert parallelism
- Context parallelism for context extension
- Improvements to recover quickly from hardware failures
Jan 1
We believe the next breakthrough in long-horizon agents is training models to manage their own context.

Introducing our new research direction on Recursive Language Models.

We are sharing our initial experiments showing the promise of RLMs.

primeintellect.ai/blog/rlm
First introduced by @a1zhang in Oct 2025, the RLM has access to its inputs through a variable in a persistent Python REPL.

The model can inspect & transform that variable with code, and pipe parts of it into sub-LLMs with tools, without ever loading the potentially huge input data into its context.
RLMs are a simple, flexible form of context folding that doesn't depend on lossy summarization.

Instead, the model proactively delegates context to:

- Python scripts (search, filter, transform)
- Sub-LLMs (fresh instances) for parallel work
- Iterative answer edits until it's actually correct
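As a hypothetical toy example of what that looks like in practice, here is the kind of code an RLM might write inside its REPL. The `context` variable and `sub_llm` helper are stand-ins for what the harness would provide, not the actual API.

```python
# Hypothetical illustration of RLM-style context management inside a REPL.
# `context` and `sub_llm` are stand-ins for what the harness would provide,
# not the actual API: a huge input bound to a variable, plus a way to call
# fresh sub-LLM instances on small prompts.
context = "## Intro\n...\n## Liability\n...\n## Termination\n..."  # placeholder input

def sub_llm(prompt: str) -> str:
    """Stand-in for spawning a fresh sub-LLM; the harness would implement this."""
    return f"[sub-LLM answer to: {prompt[:40]}...]"

# The full input never enters the model's own context window; the model
# inspects and transforms the variable with ordinary Python instead.
sections = context.split("\n## ")
relevant = [s for s in sections if "liability" in s.lower()]

# Delegate the filtered sections to sub-LLMs and keep only short notes.
notes = [sub_llm(f"Summarize the liability clauses:\n{s}") for s in relevant]

answer = sub_llm("Combine these notes into one answer:\n" + "\n".join(notes))
# The model can keep editing `answer` over further turns until it is correct.
```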
Nov 27, 2025
Introducing INTELLECT-3: Scaling RL to a 100B+ MoE model on our end-to-end stack

Achieving state-of-the-art performance for its size across math, code and reasoning

Built using the same tools we put in your hands, from environments & evals, RL frameworks, sandboxes & more
INTELLECT-3 is a 106B parameter Mixture-of-Experts model trained with both SFT and RL on top of the GLM 4.5 Air Base model.

Both stages, including multiple ablations, were carried out on a 512-GPU H200 cluster over the course of two months.
Our Training Stack

+ PRIME-RL: Our scalable, asynchronous RL trainer
+ Verifiers: Our unified library used for hundreds of envs and evals on the Environments Hub
+ Sandboxes: Custom container infra optimized for agentic RL
+ Compute: Orchestration & observability for 512 H200s
Oct 27, 2025
We're scaling our Open-Source Environments Program

As part of this, we're committing hundreds of thousands of dollars in bounties and looking for partners who want to join our mission to accelerate open superintelligence.

Join us in building the global hub for environments and evals
Over the past 2 months, we've crowdsourced 400+ environments and 80+ verified implementations through our bounties and RL residency across:

+ Autonomous AI Research
+ Browser Automation
+ Theorem Proving
+ Subject-Specific QA
+ Legal/Finance Tasks
+ Many more...
Thank you to everyone who's claimed a bounty or joined the residency!

@alexinexxx @xlr8harder @LatentLich @myainotez @ChaseBrowe32432 @varunneal @vyomdundigalla @amit05prakash @minjunesh @sidbing @unrelated333 @ljt019 @lakshyaag @sid_899 @srthkdev @semiozz @ibnAmjid and more!
Sep 25, 2025
Another week, another hundred environments.

From autonomous AI research, MCP integrations, and browser automation to domain-specific environments for economically valuable tasks across law, finance, and tax.
NanoGPT Speedrun

Evaluate the code-generation and pretraining capabilities of LLMs via the NanoGPT Speedrun benchmark.

By @leloykun
app.primeintellect.ai/dashboard/envi…
MLE-Bench

Environment for solving Kaggle ML competitions from MLE-bench.

By @creet_z
app.primeintellect.ai/dashboard/envi…
Sep 15, 2025
Today we're launching Reserved Instances

- Request 8–1,000+ GPU clusters
- Get quotes from up to 50+ providers in 24h
- Re-sell idle GPUs back to our spot market
- Support from our research team
Expanding our Compute Exchange

- Find the best and most cost-effective reserved instance offers across 50+ providers
- Re-sell idle GPUs from your reserved cluster on our liquid compute market
- H100s, H200s, B200s, and NVL72 clusters available today
Additional Features

- Orchestration with SLURM, Ray or Kubernetes
- Monitoring with Grafana dashboards
- Native integrations into our full-stack infra offering: Environment Hub, Sandboxes, Reinforcement Fine-Tuning, Multi-Node Training
- Dedicated support from our research team
