Trinity Large: An open 400B sparse MoE model

arcee.ai

154 points by linolevan a day ago


mynti - a day ago

They trained it in 33 days for ~$20M (which apparently covers not only the infrastructure but also salaries over a six-month period), and the model comes close to Qwen and DeepSeek. Pretty impressive

linolevan - a day ago

I'm particularly excited to see a "true base" model to do research off of (https://huggingface.co/arcee-ai/Trinity-Large-TrueBase).
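
For anyone who wants to kick the tires before committing to a ~400B download, here's a minimal sketch of inspecting the checkpoint's config and tokenizer with the standard transformers API (the repo id is the one linked above; whether the repo needs trust_remote_code or gated access isn't something the thread says):

    # Peek at the TrueBase checkpoint's config and tokenizer without
    # downloading the full weights. Repo id is from the link above.
    from transformers import AutoConfig, AutoTokenizer

    repo = "arcee-ai/Trinity-Large-TrueBase"
    config = AutoConfig.from_pretrained(repo)        # layer count, hidden size, MoE settings
    tokenizer = AutoTokenizer.from_pretrained(repo)

    print(config)
    print(tokenizer("Hello, Trinity")["input_ids"])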

tcdent - 5 hours ago

It's super exciting to see another American lab get in the ring. Even if they're not at SOTA on the first release, the fact that they're trying is incredible for open source AI.

Alifatisk - 7 hours ago

What did they do to make the loss drop so much in phase 3?

Also, why are they comparing with Llama 4 Maverick? Wasn’t it a flop?

mwcampbell - 8 hours ago

Given that it's a 400B-parameter model, but it's a sparse MoE model with 13B active parameters per token, would it run well on an NVIDIA DGX Spark with 128 GB of unified RAM, or do you practically need to hold the full model in RAM even with sparse MoE?
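
For rough intuition, here's a back-of-envelope sketch in Python (it assumes the announced 400B total / 13B active split and generic bytes-per-weight figures, not Arcee's exact expert layout):

    # Back-of-envelope memory estimate for a sparse MoE checkpoint.
    # 400B total / 13B active comes from the announcement; the
    # bytes-per-weight figures are generic quantization widths.
    TOTAL_PARAMS = 400e9
    ACTIVE_PARAMS = 13e9

    def gib(n_params, bytes_per_weight):
        return n_params * bytes_per_weight / 2**30

    for name, bpw in [("fp16/bf16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
        print(f"{name:>9}: full weights ~{gib(TOTAL_PARAMS, bpw):6.0f} GiB, "
              f"active per token ~{gib(ACTIVE_PARAMS, bpw):5.1f} GiB")

Even at 4-bit the full weights come out well above 128 GB, and because the router can pick different experts for every token, the whole expert set generally has to stay resident (or be streamed from fast storage); the 13B active figure mostly buys per-token compute and bandwidth, not a smaller footprint.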

kristianp - 3 hours ago

There's a free preview on openrouter: https://openrouter.ai/arcee-ai/trinity-large-preview:free
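
For anyone who'd rather poke at it programmatically than through the web chat, OpenRouter exposes an OpenAI-compatible chat completions endpoint; a minimal sketch (the model slug is taken from the URL above, and you still need your own OpenRouter API key):

    # Hit the free preview through OpenRouter's OpenAI-compatible endpoint.
    # Model slug comes from the URL above; set OPENROUTER_API_KEY yourself.
    import os
    import requests

    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "arcee-ai/trinity-large-preview:free",
            "messages": [{"role": "user", "content": "In two sentences, what is a sparse MoE model?"}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])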

greggh - 8 hours ago

The only thing I question is the use of Maverick in their comparison charts. That's like comparing a pile of rocks to an LLM.

frogperson - 8 hours ago

What exactly does "open" mean in this case? Is it weights and data or just weights?

fuddle - 6 hours ago

> We optimize for performance per parameter and release weights under Apache-2.0

How do they plan to monetize?

syntaxing - 6 hours ago

So refreshing to see open-source models like this come from the US. I would love a ~100B-class one that can compete against gpt-oss-120b and GLM-4.5 Air

khimaros - 4 hours ago

unsloth quants are up https://huggingface.co/unsloth/Trinity-Large-Preview-GGUF
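
A minimal sketch of grabbing one of those quants with huggingface_hub and handing it to llama.cpp (the Q4_K_M pattern is an assumption about which quants the repo actually carries; check the file listing and pick one that fits your RAM):

    # Download a single GGUF quant from the repo linked above, then run it
    # with llama.cpp. The Q4_K_M pattern is a guess at the available quants.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="unsloth/Trinity-Large-Preview-GGUF",
        allow_patterns=["*Q4_K_M*"],   # only fetch the chosen quant's shards
    )
    print("Downloaded to:", local_dir)
    # Then, from a shell (llama.cpp loads multi-part GGUFs via the first shard):
    #   llama-cli -m <local_dir>/<first-shard>.gguf -p "Hello" -n 128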

observationist - 12 hours ago

This is a wonderful release.

LoganDark - 4 hours ago

According to the article, nearly 50% of the dataset is synthetic (8T out of 17T tokens). I don't know what constitutes "a breadth of state-of-the-art rephrasing approaches", but I have some doubts about models trained on LLM output, so I hope that isn't what this amounts to.

0xdeadbeefbabe - 6 hours ago

Is anyone excited to do ablation testing on it?