Qwen3-TTS family is now open sourced: Voice design, clone, and generation

qwen.ai

466 points by Palmik 12 hours ago


simonw - 2 hours ago

I got this running on macOS using mlx-audio thanks to Prince Canuma: https://x.com/Prince_Canuma/status/2014453857019904423

Here's the script I'm using: https://github.com/simonw/tools/blob/main/python/q3_tts.py

You can try it with uv (downloads a 4.5GB model on first run) like this:

  uv run https://tools.simonwillison.net/python/q3_tts.py \
    'I am a pirate, give me your gold!' \
    -i 'gruff voice' -o pirate.wav
simonw - 8 hours ago

If you want to try out the voice cloning yourself you can do that at this Hugging Face demo: https://huggingface.co/spaces/Qwen/Qwen3-TTS - switch to the "Voice Clone" tab, paste in some example text and use the microphone option to record yourself reading that text - then paste in other text and have it generate a version of that text read in your voice.

I shared a recording of audio I generated with that here: https://simonwillison.net/2026/Jan/22/qwen3-tts/
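
If you'd rather script the Space than click through the UI, the gradio_client library can drive it. A rough sketch - the endpoint and argument names here are guesses on my part, so run client.view_api() first to see the real signature:

  from gradio_client import Client, handle_file

  client = Client("Qwen/Qwen3-TTS")
  print(client.view_api())  # lists the Space's real endpoints and parameters

  # Hypothetical call shape - substitute the actual api_name and argument
  # names reported by view_api() above.
  result = client.predict(
      text="Text you want read back in your cloned voice",
      reference_audio=handle_file("me_reading_the_prompt.wav"),
      reference_text="The sentence you read in the recording",
      api_name="/voice_clone",
  )
  print(result)  # path to the generated audio file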

TheAceOfHearts - 7 hours ago

Interesting model. I've managed to get the 0.6B param model running on my old 1080 and I can generate 200-character chunks safely without going OOM, so I thought that making an audiobook of the Tao Te Ching would be a good test. Unfortunately each snippet varies drastically in quality: sometimes the speaker is clear and coherent, but other times it bursts out laughing or moaning. In a way it feels a bit like magical roulette, never being quite certain of what you're going to get. It does have a bit of charm though: when you chain the various snippets together you really don't know what direction it's gonna go.

Using speaker Ryan seems to be the most consistent; I tried speaker Eric and it sounded like someone putting on a fake, exaggerated Chinese accent, as if mocking its speakers.

If it wasn't for the unpredictable level of emotions from each chunk, I'd say this is easily the highest quality TTS model I've tried.
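
For anyone who wants to reproduce the experiment, the chaining part is straightforward: split the text into chunks under the limit and concatenate the resulting WAVs. A minimal sketch, assuming the model writes standard PCM WAV files with matching sample rates and that synthesize() is your own wrapper around the model:

  import re
  import wave

  def chunk_text(text, limit=200):
      # Split at sentence boundaries, then pack sentences into <=limit chunks.
      chunks, current = [], ""
      for sentence in re.split(r"(?<=[.!?])\s+", text):
          if current and len(current) + len(sentence) + 1 > limit:
              chunks.append(current)
              current = sentence
          else:
              current = f"{current} {sentence}".strip()
      if current:
          chunks.append(current)
      return chunks

  def concatenate_wavs(paths, out_path):
      # Assumes every chunk has the same sample rate / width / channel count.
      with wave.open(out_path, "wb") as out:
          for i, path in enumerate(paths):
              with wave.open(path, "rb") as w:
                  if i == 0:
                      out.setparams(w.getparams())
                  out.writeframes(w.readframes(w.getnframes()))

  # chunks = chunk_text(open("tao_te_ching.txt").read())
  # for i, c in enumerate(chunks):
  #     synthesize(c, f"chunk_{i:04d}.wav")  # your wrapper around the model
  # concatenate_wavs([f"chunk_{i:04d}.wav" for i in range(len(chunks))], "audiobook.wav")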

genewitch - 9 hours ago

It isn't often that technology gives me chills, but this did it. I've used "AI" TTS tools since 2018 or so, and I thought the stuff from two years ago was about the best we were going to get. I don't know the size of these models; I scrolled straight to the samples. I am going to get the models set up somewhere and test them out.

Now, maybe the results were cherry-picked. I know everyone else who has released one of these cherry-picks which samples to publish. However, this is the first time I've considered it plausible to use AI TTS to remaster old radio plays and the like, where a section of audio is unintelligible but can be deduced from context - like a tape glitch where someone says "HEY [...]LAR!" and it's an episode of Yours Truly, Johnny Dollar...

I have dozens of hours of audio of Bob Bailey and other performers of that era.

throwaw12 - 10 hours ago

Qwen team, please please please, release something that surpasses the coding abilities of Opus 4.5.

Although I like the model, I don't like that company's leadership, how closed it is, and how divisive they are in terms of politics.

rahimnathwani - 8 hours ago

Has anyone successfully run this on a Mac? The installation instructions appear to assume an NVIDIA GPU (CUDA, FlashAttention), and I’m not sure whether it works with PyTorch’s Metal/MPS backend.
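
For reference, the generic PyTorch side would just be swapping the CUDA device for MPS; whether the repo's attention code then actually runs is the part I can't tell from the instructions:

  import torch

  # Pick the best available backend: Apple's MPS, then CUDA, then CPU.
  if torch.backends.mps.is_available():
      device = torch.device("mps")
  elif torch.cuda.is_available():
      device = torch.device("cuda:0")
  else:
      device = torch.device("cpu")

  print(f"Using device: {device}")
  # model.to(device)  # FlashAttention is CUDA-only, so on MPS the attention
  # implementation would need to fall back to something like "sdpa" or "eager".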

satvikpendem - 8 hours ago

This would be great for audiobooks; some of the current AI TTS models still struggle with them.

girvo - 4 hours ago

Amusingly, one of their examples (the final Age Control example) is prompted to have an American English accent, but to my ear it sounds like an Australian trying to sound American, haha.

PunchyHamster - 8 hours ago

Looking forward to my grandma being scammed by one!

jakobdabo - 6 hours ago

Can anyone please provide directions/links to tools that can be run locally, take an audio recording of a voice as input, and produce output with the same voice saying the same thing with the same intonations, but with a fixed/changed accent?

This is needed for processing an indie game's voice recordings, where the voice actors weren't native speakers and had some accent.

gunalx - 3 hours ago

Voice actors are so cooked. Some of the demos arguably sounded way better than a lot of indie voice acting.

whinvik - 7 hours ago

Haha, something I want to try out. I have started using voice input more and more instead of typing, and I'm now on my second app and second speech-to-text model: Handy and Parakeet V3.

Parakeet is pretty good, but there are times it struggles. Would be interesting to see how Qwen compares once Handy has it in.

thedangler - 9 hours ago

Kind of a noob here - how would I implement this locally? How do I pass it audio to process? I'm assuming it's in the API spec?
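
Concretely, is the flow something like loading the reference audio into an array and handing it to the model? A sketch of what I imagine - the function and parameter names below are guesses on my part, not the actual API:

  import soundfile as sf  # pip install soundfile

  # Load the reference recording as a float waveform plus its sample rate.
  waveform, sample_rate = sf.read("my_reference_voice.wav")

  # Hypothetical call shape for the cloning step:
  # audio_out = model.generate(
  #     text="Text to speak in the cloned voice",
  #     reference_audio=waveform,
  #     reference_sample_rate=sample_rate,
  # )
  # sf.write("output.wav", audio_out, sample_rate)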

swaraj - 5 hours ago

Tried the voice clone with a 30s Trump clip (with reference text), and it didn't sound like him at all.

dangoodmanUT - 3 hours ago

Many voices clone better than 11labs, though admittedly at a lower bitrate.

JonChesterfield - 8 hours ago

I see a lot of references to `device_map="cuda:0"` but no CUDA code in the GitHub repo. Is the complete stack FlashAttention plus this Python code plus the weights file, or does one need vLLM running as well?
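
To make the question concrete, I'd expect the load path to be the standard transformers one, roughly like this (the model id and class are my guesses from the snippets, not the repo's documented API):

  import torch
  from transformers import AutoModel, AutoProcessor

  model = AutoModel.from_pretrained(
      "Qwen/Qwen3-TTS",                         # placeholder model id
      device_map="cuda:0",                      # put all weights on GPU 0
      torch_dtype=torch.bfloat16,
      attn_implementation="flash_attention_2",  # optional; use "sdpa" if flash-attn isn't installed
      trust_remote_code=True,
  )
  processor = AutoProcessor.from_pretrained("Qwen/Qwen3-TTS", trust_remote_code=True)
  # i.e. plain PyTorch + transformers + the weights - which would mean no vLLM needed, or is there more to it?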

sails - 6 hours ago

Any recommendations for an iOS app to test models like this? There are a few good ones for text gen, and it’s a great way to try models

albertwang - 10 hours ago

Great news, this looks great! Is it just me, or do most of the English audio samples sound like anime voices?

indigodaddy - 9 hours ago

How does the cloning compare to pocket TTS?

ideashower - 9 hours ago

Huh. One of the English Voice Clone examples features Obama.

salzig - 7 hours ago

So now we're getting every movie in the "original voice" but in the local language? Can't wait to watch anime or Bollywood :D

wahnfrieden - 9 hours ago

How is it for Japanese?

lostmsu - 10 hours ago

I still don't know anyone who has managed to get Qwen3-Omni to work properly on a local machine.