Lucas Atkins
cto @arcee_ai
Jan 27
Today, we are releasing the first weights from Trinity-Large, our frontier-scale model in the Trinity MoE family. American Made.

- Trinity-Large-Preview (instruct)
- Trinity-Large-Base (pretrain checkpoint)
- Trinity-Large-TrueBase (10T checkpoint, before instruct data/anneal)

Trinity-Large-Base is a truly frontier-class foundation model, and one of the proudest (and most difficult) achievements I've had the honor of helping produce in my professional career.
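For anyone who wants to try the weights, here is a minimal loading sketch using Hugging Face transformers. The repo id "arcee-ai/Trinity-Large-Preview" is an assumption based on the release names above, not a confirmed path.

```python
# Minimal sketch: load the instruct preview and run one chat turn.
# The repo id below is assumed from the release names, not confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Trinity-Large-Preview"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs
    torch_dtype="auto",  # use the checkpoint's native dtype
)

# Chat-style generation via the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize what an MoE model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```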
Dec 15, 2025
Lots of releases this morning, so we thought we'd join the fun.

We have an excellent write-up from @chargoddard on how he used DistillKit to convert AFM-4.5B into a KDA hybrid. It's not production-ready, but it shows tremendous potential.

And DistillKit is finally ready.

With the KDA conversion, we were able to maintain strong retrieval scores up to the sequence length we trained at (32k).

This was done on minimal compute and gives us a strong signal as we look toward our next Trinity architectures.
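For context on what a conversion like this optimizes, here is a minimal sketch of a standard logit-distillation step in plain PyTorch: a frozen teacher (the original attention model) supervises a student whose attention layers have been swapped for the KDA hybrid. This shows the general technique, not DistillKit's actual API; `teacher`, `student`, and `batch` are hypothetical placeholders.

```python
# Generic logit distillation in plain PyTorch (not DistillKit's API).
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened distributions."""
    t = temperature
    s = F.log_softmax(student_logits / t, dim=-1)
    p = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t**2 so gradient magnitude is comparable across temperatures.
    return F.kl_div(s, p, reduction="batchmean") * (t ** 2)

@torch.no_grad()
def teacher_forward(teacher, batch):
    # Teacher is frozen; no gradients needed.
    return teacher(**batch).logits

def step(student, teacher, batch, optimizer):
    teacher_logits = teacher_forward(teacher, batch)
    student_logits = student(**batch).logits
    loss = distill_loss(student_logits, teacher_logits)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```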
Dec 1, 2025
Today, we are introducing Trinity, the start of an open-weight MoE family that businesses and developers can own.

Trinity-Mini (26B-A3B)
Trinity-Nano-Preview (6B-A1B)

Available today on Hugging Face.

I'm experiencing a combination of extreme pride in my team and crippling exhaustion, so I'm struggling to put into words just how excited I am to have these models out. Especially Mini.
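A note on the naming: "26B-A3B" is conventionally read as roughly 26B total parameters with about 3B active per token, which is what top-k expert routing buys you. Below is a toy sketch of that general mechanism; it is not Trinity's actual architecture, and all sizes are illustrative.

```python
# Toy top-k MoE layer illustrating the total-vs-active parameter split.
# Only top_k of n_experts run per token, so compute scales with the
# active parameters while capacity scales with the total.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim=512, n_experts=16, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                          # x: (tokens, dim)
        scores = self.router(x)                    # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Each token is processed by only its top_k selected experts.
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * self.experts[e](x[mask])
        return out

moe = TopKMoE()
y = moe(torch.randn(8, 512))  # 8 tokens; each touches 2 of 16 experts
```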