Arcee Trinity Mini: US-Trained Moe Model

arcee.ai

68 points by hurrycane a day ago


halJordan - a day ago

Looks like a slightly weaker version of Qwen3-30B-A3B, which makes sense because it's slightly smaller. If they can keep that efficiency going into the large one, it'll be sick.

Trinity Large [will be] a 420B-parameter model with 13B active parameters. Just perfect for a large RAM pool at Q4.
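Back-of-envelope for why that needs a large RAM pool (a sketch; the ~4.5 effective bits per weight is my assumption, not a published number):

    # Rough weight-memory estimate for an MoE model quantized to ~4 bits/weight.
    # The 4.5 effective bits (quantization scales included) is an assumption,
    # not a vendor figure; MoE memory counts *all* experts, not just the active ones.

    def q4_memory_gb(total_params_b: float, bits_per_weight: float = 4.5) -> float:
        """Approximate weight memory in GB, given total params in billions."""
        return total_params_b * 1e9 * bits_per_weight / 8 / 1e9

    for name, params_b in [("Trinity Mini", 26), ("Trinity Large", 420)]:
        print(f"{name}: ~{q4_memory_gb(params_b):.0f} GB of weights at ~Q4")
    # Trinity Large comes out around ~236 GB: only the 13B active parameters are
    # touched per token, but all 420B have to sit in memory -- hence the RAM pool.

The point being that the active-parameter count drives compute per token, while the total parameter count drives memory.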

Balinares - 11 hours ago

Interesting. Always glad to see more open weight models.

I do appreciate that they openly acknowledge the areas where they followed DeepSeek's research. I wouldn't consider that a given for a US company.

Anyone tried these as coding models yet?

davidsainez - 20 hours ago

Excited to put this through its paces. It seems most directly comparable to GPT-OSS-20B. Comparing their numbers on the Together API: Trinity Mini is slightly less expensive ($0.045/$0.15 vs. $0.05/$0.20) and seems to have better latency and throughput numbers.
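To put those numbers in per-request terms (a quick sketch; I'm assuming the figures are USD per million input/output tokens, and the request size is made up):

    # Per-request cost comparison at the quoted Together prices, assumed to be
    # USD per million input/output tokens. The workload below is hypothetical.

    def request_cost(in_tokens: int, out_tokens: int,
                     in_price: float, out_price: float) -> float:
        """Cost in USD for one request, prices quoted per 1M tokens."""
        return in_tokens / 1e6 * in_price + out_tokens / 1e6 * out_price

    workload = (4_000, 1_000)  # hypothetical 4k-in / 1k-out request
    print(f"Trinity Mini: ${request_cost(*workload, 0.045, 0.15):.6f}")  # ~$0.000330
    print(f"GPT-OSS-20B:  ${request_cost(*workload, 0.05, 0.20):.6f}")   # ~$0.000400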

htrp - a day ago

Trinity Nano Preview: 6B parameter MoE (1B active, ~800M non-embedding), 56 layers, 128 experts with 8 active per token

Trinity Mini: 26B parameter MoE (3B active), fully post-trained reasoning model

They did the pretraining themselves and are still training the large version on 2048 B300 GPUs.
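To make the "128 experts with 8 active per token" spec concrete, here's a minimal top-k routing sketch (generic PyTorch with made-up hidden sizes; an illustration of MoE routing in general, not Arcee's implementation):

    # Minimal top-k MoE routing sketch using the "128 experts, 8 active per token"
    # numbers quoted above. Generic PyTorch illustration, not Arcee's implementation;
    # d_model and the expert FFN width are made-up toy values.
    import torch
    import torch.nn as nn

    class TopKMoELayer(nn.Module):
        def __init__(self, d_model: int = 128, n_experts: int = 128, k: int = 8):
            super().__init__()
            self.k = k
            self.router = nn.Linear(d_model, n_experts)  # one score per expert
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (tokens, d_model). Route each token to its k highest-scoring experts.
            weights, idx = self.router(x).softmax(dim=-1).topk(self.k, dim=-1)
            weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the chosen k
            out = torch.zeros_like(x)
            for slot in range(self.k):
                for e in idx[:, slot].unique().tolist():
                    mask = idx[:, slot] == e
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
            return out

    # Only k/n_experts of the FFN weights run per token, which is how a 6B-parameter
    # model ends up with ~1B "active" parameters.
    x = torch.randn(4, 128)
    print(TopKMoELayer()(x).shape)  # torch.Size([4, 128])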

ksynwa - 20 hours ago

> Trinity Large is currently training on 2048 B300 GPUs and will arrive in January 2026.

How long does the training take?
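Nothing announced, but the usual way to ballpark it is compute ≈ 6 × active parameters × training tokens, divided by sustained cluster throughput. A sketch where the token budget, per-GPU FLOPS, and utilization are all made-up placeholders:

    # Back-of-envelope using the common "training FLOPs ~= 6 * active params * tokens"
    # rule. Every constant below is a placeholder for illustration, not an Arcee figure.

    def training_days(active_params: float, tokens: float,
                      n_gpus: int, flops_per_gpu: float, mfu: float) -> float:
        """Estimated wall-clock days: total FLOPs / sustained cluster throughput."""
        total_flops = 6 * active_params * tokens      # dense-equivalent compute
        sustained = n_gpus * flops_per_gpu * mfu      # achieved, not peak, FLOPS
        return total_flops / sustained / 86_400

    print(training_days(
        active_params=13e9,    # Trinity Large's stated active parameter count
        tokens=15e12,          # placeholder token budget
        n_gpus=2048,           # from the post
        flops_per_gpu=5e15,    # placeholder per-GPU throughput; check the B300 spec sheet
        mfu=0.35,              # placeholder model FLOPs utilization
    ))  # ~3.8 days with these made-up inputs; a real schedule involves far more than the raw FLOPs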

trvz - 19 hours ago

Moe ≠ MoE

bitwize - a day ago

A moe model you say? How kawaii is it? uwu