Confer – End to end encrypted AI chat

confer.to

59 points by vednig 8 hours ago


Signal creator Moxie Marlinspike wants to do for AI what he did for messaging - https://arstechnica.com/security/2026/01/signal-creator-moxi...

Private Inference: https://confer.to/blog/2026/01/private-inference/

shawnz - 2 hours ago

I don't agree that this is end to end encrypted. For example, a compromise of the TEE would mean your data is exposed. In a truly end to end encrypted system, I wouldn't expect a server side compromise to be able to expose my data.

This is similar to the weaselly language Google has been using for the Magic Cue feature ever since Android 16 QPR 1. When it launched, it was local-only -- now it's local and in the cloud "with attestation". I don't like this trend, and I don't think I'll be using such products.

jeroenhd - 6 hours ago

An interesting take on the AI model. I'm not sure what their business model is, since training data is the one thing free AI users "pay" with in return for the service, but at least this chat model seems honest.

Using remote attestation in the browser to attest the server rather than the client is refreshing.

Using passkeys to encrypt data does limit browser/hardware combinations, though. My Firefox+Bitwarden setup doesn't work with this, unfortunately. Firefox on Android also seems to be broken, but Chrome on Android works well at least.

datadrivenangel - 6 hours ago

I get a fun error message on Debian 13 with Firefox 140:

"This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features. Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.)."
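(For context, a rough sketch of what a PRF-backed scheme presumably does with that extension: the passkey's PRF evaluation returns a 32-byte secret bound to the credential, which the app then stretches into an encryption key with HKDF. All names, salts, and parameters below are illustrative guesses, not Confer's actual code.)

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869): extract-then-expand a key from input key material."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:  # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# The PRF extension output is bound to the passkey and the relying party;
# here we just stand in a fixed 32-byte value for illustration.
prf_output = bytes(32)
key = hkdf_sha256(prf_output, salt=b"demo-salt", info=b"chat-encryption-key")
print(len(key))  # 32
```

Because the PRF output never leaves the authenticator ceremony, the derived key is only reconstructible on a device holding the passkey, which is presumably why the site hard-requires the extension.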

JohnFen - 7 hours ago

Unless I misunderstand, this doesn't seem to address what I consider to be the largest privacy risk: the information you're providing to the LLM itself. Is there even a solution to that problem?

I mean, e2ee is great and welcome, of course. That's a wonderful thing. But I need more.

throwaway35636 - 33 minutes ago

Interestingly, the confer image on GitHub doesn’t seem to include the model weights in the attestation (they appear to be loaded from a mounted ext4 disk without dm-verity). This probably doesn’t compromise the privacy of the communication (as long as the model format contains no executable parts), but it exposes users to a “model swapping” attack, where the confer operator makes a user talk to an “evil” model without their being able to notice. Such an evil model could be fine-tuned to produce specifically crafted output for the user. Authenticating the model seems important; maybe it is done at another level of the stack?
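(A minimal sketch of how that gap could be closed, assuming a hypothetical client-side check; nothing here reflects Confer's actual stack: hash the weight files deterministically and pin the digest inside the signed attestation report the client verifies.)

```python
import hashlib
import hmac
from pathlib import Path

def measure_weights(weight_dir: Path) -> str:
    """Hash every weight file (relative name + contents) in sorted order
    into a single digest suitable for folding into an attestation measurement."""
    h = hashlib.sha256()
    for path in sorted(weight_dir.rglob("*")):
        if path.is_file():
            h.update(path.relative_to(weight_dir).as_posix().encode())
            h.update(path.read_bytes())
    return h.hexdigest()

def verify_model(weight_dir: Path, expected_digest: str) -> bool:
    """Hypothetical client-side check: recompute the digest and compare it
    (in constant time) against the value pinned in the attestation report."""
    return hmac.compare_digest(measure_weights(weight_dir), expected_digest)
```

With something like this, swapping in an "evil" model would change the measured digest and fail attestation; dm-verity on the weights disk would achieve the same end at the block-device layer.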

paxys - 2 hours ago

"trusted execution environment" != end-to-end encryption

The entire point of E2EE is that both "ends" need to be fully under your control.

orbital-decay - an hour ago

At least Cocoon and similar services relying on TEE don't call this end-to-end encryption. Hardware DRM is not E2EE, it's security by obscurity. Not to say it doesn't work, but it doesn't provide mathematically strong guarantees either.

jdthedisciple - an hour ago

The best private LLM is the one you host yourself.

slipheen - an hour ago

Does it say anywhere which model it’s using?

I see references to vLLM in the GitHub repo, but not which actual model (Llama, Mistral, etc.), whether they have a custom fine-tune, or whether you supply your own Hugging Face link.

hiimkeks - 2 hours ago

I am confused. I get E2EE chat with a TEE, but the TEEs I know of (admittedly not an expert) are not powerful enough to do the actual inference, at least not any useful kind. The blog posts published so far just gloss over that.

letmetweakit - 2 hours ago

How does inference work with a TEE, isn’t performance a lot more restricted?

AdmiralAsshat - 8 hours ago

Well, if anyone could do it properly, Moxie certainly has the track record.

f_allwein - 7 hours ago

Interesting! I wonder a) how much of an issue this addresses, i.e. how worried people actually are about privacy when they use other LLMs, and b) how much of a disadvantage it is for Confer not to be able to read or train on user data.

LordDragonfang - 2 hours ago

> Advanced Passkey Features Required

> This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features.

> Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.).

(Running Chrome 143)

So... does this just not support desktops without overpriced webcams, or am I missing something?

jeroadhd - 2 hours ago

Again with the confidential-VM and remote-attestation crypto theater? Moxie has a good track record in general, yet he seems to have a huge blind spot: for some inexplicable reason he trusts Intel's broken "trusted VM" computing. He designed Signal's server-side message backups around similar "secure enclave" snake oil.