Show HN: Multimodal perception system for real-time conversation

raven.tavuslabs.org

41 points by mert_gerdan 8 hours ago


I work on real-time voice/video AI at Tavus, and for the past few years I’ve mostly focused on how machines respond in a conversation.

One thing that’s always bothered me is that almost all conversational systems still reduce everything to transcripts, throwing away a ton of signals that could be used downstream. Some existing emotion-understanding models try to analyze and classify these signals into small sets of arbitrary boxes, but they’re either not fast enough or not rich enough to do this with conviction in real time.

So I built a multimodal perception system that encodes visual and audio conversational signals and translates them into natural language by aligning a small LLM on those signals. The agent can effectively "see" and "hear" you, and you can interface with it via an OpenAI-compatible tool schema in a live conversation.
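For a concrete sense of the interface, here's a rough sketch of a perception tool in the standard OpenAI function-calling format (simplified and illustrative - the tool name and parameters below aren't the exact schema):

    # Rough sketch of a perception tool in the OpenAI function-calling
    # format. The tool name and fields are illustrative, not the exact API.
    perception_tool = {
        "type": "function",
        "function": {
            "name": "get_perception_context",
            "description": (
                "Return a short natural-language description of the user's "
                "current visual and vocal state in the live conversation."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "window_seconds": {
                        "type": "number",
                        "description": "How far back to summarize, in seconds.",
                    }
                },
                "required": ["window_seconds"],
            },
        },
    }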

It outputs short natural-language descriptions of what’s going on in the interaction - things like uncertainty building, sarcasm, disengagement, or even a shift in attention within a single turn of a conversation.
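A few made-up examples of the kind of descriptions it emits, and how an agent might fold them into a turn (illustrative only, not real model output):

    # Made-up examples of per-turn perception descriptions (illustrative only):
    observations = [
        "User leans back and pauses mid-sentence; uncertainty seems to be building.",
        "Flat intonation on 'great, sure' contrasts with the words; likely sarcasm.",
        "Gaze drifts off-camera for a few seconds; attention has shifted away.",
    ]

    # The agent can fold the latest description into its turn context:
    def build_turn_context(transcript: str, observation: str) -> str:
        return f"{transcript}\n\n[Perception: {observation}]"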

Some quick specs:

- Runs in real time, per conversation

- Processes ~15 fps video plus overlapping audio alongside the conversation

- Handles nuanced emotions, whispers vs. shouts

- Trained on synthetic + internal conversation data

Happy to answer questions or go deeper on architecture/tradeoffs

More details here: https://www.tavus.io/post/raven-1-bringing-emotional-intelli...

arctic-true - 3 hours ago

This is super interesting. But I have to wonder how much it costs on the back end - it sounds like it’s essentially just running a boatload of specialized agents, constantly, throughout the whole interaction (and with super-token-rich input for each). Neat for a demo, but what would it cost to run this for a 30-minute job interview? Or a 7-hour deposition?

Another concern I’d have is bias. If I am prone to speaking loudly, is it going to say I’m shrill? If my camera is not aligned well, is it going to say I’m not making eye contact?

edbaskerville - 3 hours ago

Old Macs in the background. Electronic soundtrack reminiscent of Chariots of Fire, which played during the Mac intro.

ycombiredd - 4 hours ago

Hmm... my first thought is: great, now not only will, e.g., HR/screening/hiring hand off the reading/discerning tasks to an ML model, they'll outsource the things that require any sort of emotional understanding (compassion, stress, anxiety, social awkwardness, etc.) to a model too.

One part of me is inclined to think "good, take some subjectivity away from a human with poor social skills", but another part of me is repulsed by the concept, because we see how otherwise capable humans will defer to the perceived "expertise" of an LLM, whether out of genuine belief in the machine or laziness (see recent kerfuffles in the legal field over hallucinated citations, etc.).

Objective classification in CV is one thing, but subjective identification (psychology, pseudoscientific forensic sociology, etc.) via a multimodal model triggers a sort of danger warning in me as an initial reaction.

Neat work, though, from a technical standpoint.

ashishheda - 5 hours ago

Wonder how it works?

jesserowe - 8 hours ago

the demo is wild... kudos

Johnny_Bonk - 4 hours ago

Holy
