Show HN: Gemma Gem – AI model embedded in a browser – no API keys, no cloud
github.com | 99 points by ikessler | 12 hours ago
Gemma Gem is a Chrome extension that loads Google's Gemma (2B) via WebGPU in an offscreen document and gives it tools to interact with any webpage: read content, take screenshots, click elements, type text, scroll, and run JavaScript.
You get a small chat overlay on every page. Ask it about the page and it (usually) figures out which tools to call. It has a thinking mode that shows chain-of-thought reasoning as it works.
It's a 2B model in a browser. It works for simple page questions and running JavaScript, but multi-step tool chains are unreliable and it sometimes ignores its tools entirely. The agent loop has zero external dependencies and can be extracted as a standalone library if anyone wants to experiment with it.
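The agent loop is simple enough to sketch. The snippet below is a minimal illustration of the general pattern, assuming the model emits JSON tool calls; the tool names, the parsing, and the generate() hook are placeholders, not the extension's actual code:

    // Minimal sketch of a dependency-free tool-calling loop (illustrative only;
    // the tool set, the JSON format, and generate() are assumptions, not the
    // extension's real interfaces).
    type ToolCall = { tool: string; args: Record<string, unknown> };

    const tools: Record<string, (args: any) => Promise<string>> = {
      read_page: async () => document.body.innerText.slice(0, 4000),
      click: async ({ selector }) => {
        (document.querySelector(selector) as HTMLElement | null)?.click();
        return "clicked " + selector;
      },
      run_js: async ({ code }) => String(await (0, eval)(code)),
    };

    // Pull the first JSON object out of the model's reply, if any.
    function tryParseToolCall(text: string): ToolCall | null {
      const match = text.match(/\{[\s\S]*\}/);
      try { return match ? (JSON.parse(match[0]) as ToolCall) : null; } catch { return null; }
    }

    // generate() is whatever runs the model (e.g. a WebGPU runtime in an
    // offscreen document) and returns its next reply as plain text.
    async function agentLoop(task: string, generate: (msgs: string[]) => Promise<string>) {
      const messages = [task];
      for (let step = 0; step < 8; step++) {              // hard cap: small models drift
        const reply = await generate(messages);
        const call = tryParseToolCall(reply);
        if (!call || !tools[call.tool]) return reply;     // plain answer, stop here
        const result = await tools[call.tool](call.args);
        messages.push(reply, "TOOL RESULT: " + result);   // feed the result back in
      }
      return "Step limit reached without a final answer.";
    }

The real extension exposes more tools (screenshots, typing, scrolling) and runs the model in an offscreen document via WebGPU, per the description above.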
There's also the Prompt API, currently in Origin Trial, which exposes this API surface to sites: https://developer.chrome.com/docs/ai/prompt-api

I just checked the stats: I expect that at some point this will become a native web feature, but not anytime soon, since the model download is many multiples the size of the browser itself. Maybe at some point these APIs could use LLMs built into the OS, like we do for graphics drivers.

FWIW - I did a real-world experiment pitting the built-in Gemini Nano against a free equivalent from OpenRouter (server call), and the free, server-side option was better in literally every performance metric. That's not to say the in-browser model isn't valuable for privacy and offline use, just that the standard case is currently pretty rough. https://sendcheckit.com/blog/ai-powered-subject-line-alterna...

That's exactly where we're headed. Architecturally it makes zero sense to spin up an LLM in every app's userspace. Since we have dedicated NPUs and GPUs now, we need a unified system-level orchestrator to balance inference queues across different programs - exactly how the OS handles access to the NIC or the audio stack. The browser should just be making an IPC call to the system instead of hauling its own heavy inference engine along for the ride.

The Summarizer API is already shipped, and any website can use it to quietly trigger a 2 GB download by simply calling Summarizer.create().

(requires user activation)

It's a neat idea, but giving a 2B model full JS execution privileges on a live page is a bit sketchy from a security standpoint. Plus, why tie inference to the browser lifecycle at all? If Chrome crashes or the tab gets discarded, your agent's state is just gone. A local background daemon with a "dumb" extension client seems way more predictable and robust, FWIW.

> but giving a 2B model full JS execution privileges on a live page is a bit sketchy from a security standpoint.

Every webpage I've ever visited has full JS execution privileges, and I trust half of them less than an LLM.

Note that every webpage does not have full JS execution privileges on other parts of the web.

There's IndexedDB, OPFS, etc. Plenty of ways to store stuff in a browser that will survive your browser restarting. Background daemons don't work unless you install and start them yourself. That's a lot of installation friction. The whole point of a browser app is that you don't have to install stuff. And what you call sketchy is what billions of people default to every day when they use web applications.

I would love to see someone build it as some kind of SDK. App builders could use it as a local LLM plugin when dealing with sensitive data. It's usually too much when an app asks someone to set up a local LLM, but this, I believe, could solve that problem?

It's not too hard to code together with an LLM. I've been playing with small embedding models in browsers over the last few weeks. You don't really need that much. The limitation is that these things are fairly limited and slow to begin with, and they run slower in a browser even with WebGPU. But you can do some cool stuff, and adding an LLM is just more of the same. If you want to see an example of this, https://querylight.tryformation.com/ is where I put my search library and demo. It does vector search in the browser. Which apps have you seen ask someone to set up a local LLM? Can't recall having ever seen one.
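To make the vector-search-in-the-browser point concrete, here is a minimal sketch; the embed() parameter stands in for whatever small embedding model you load locally and is not the linked demo's actual API:

    // Brute-force cosine-similarity search; fine for a few thousand chunks
    // in the browser. embed() is a placeholder for a locally loaded
    // embedding model (WebGPU/WASM), not a real library call.
    type Doc = { id: string; text: string; vec: Float32Array };

    function cosine(a: Float32Array, b: Float32Array): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
    }

    async function search(
      query: string,
      docs: Doc[],
      embed: (text: string) => Promise<Float32Array>,
      k = 5,
    ) {
      const queryVec = await embed(query);
      return docs
        .map(d => ({ doc: d, score: cosine(queryVec, d.vec) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, k);
    }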
I have this written down as a project I'll attempt in the future; I also call it "weapons-grade unemployment" in my notes. I was proposing to use Granite, but the principle still stands. You beat me to it.

Not sure if I actually want this (pretty sure I don't) -- but very cool that such a thing is now possible...

It would be awesome if a local model were directly embedded in Chrome and developers could query it. Anyone know if this is somehow possible without going through an extension?
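On that last question: the Prompt API mentioned earlier in the thread is the closest answer today. A rough sketch follows, with the caveat that the API is still in Origin Trial and its surface has changed between Chrome releases, so treat the names as approximate and check https://developer.chrome.com/docs/ai/prompt-api:

    // Chrome's built-in Prompt API (Origin Trial). The surface has been
    // renamed more than once, so these names are approximate; verify against
    // the linked docs before relying on them.
    declare const LanguageModel: undefined | {
      availability(): Promise<string>;
      create(options?: object): Promise<{ prompt(input: string): Promise<string> }>;
    };

    async function askBuiltInModel(question: string): Promise<string> {
      if (!LanguageModel) throw new Error("Prompt API not exposed in this browser");
      const status = await LanguageModel.availability();  // e.g. available vs. needs download
      if (status === "unavailable") throw new Error("On-device model unavailable");
      const session = await LanguageModel.create();       // may trigger a large model download
      return session.prompt(question);
    }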
avaer - 10 hours ago
Different use case but a similar approach.

    Model Name: v3Nano
    Version: 2025.06.30.1229
    Backend Type: GPU (highest quality)
    Folder size: 4,072.13 MiB
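For comparison, the already-shipped Summarizer API mentioned earlier (presumably backed by the same on-device model as the stats above) can be called roughly like this; the shape below is an approximation of the documented API, with options omitted:

    // Summarizer API sketch. Summarizer.create() can kick off the on-device
    // model download and is gated on user activation, so call it from a
    // click handler.
    declare const Summarizer: undefined | {
      availability(): Promise<string>;
      create(options?: object): Promise<{ summarize(text: string): Promise<string> }>;
    };

    async function summarizePage(): Promise<string> {
      if (!Summarizer) throw new Error("Summarizer API not available in this browser");
      const summarizer = await Summarizer.create();   // requires a user gesture
      return summarizer.summarize(document.body.innerText);
    }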
Other commenters in the thread: michaelbuckbee, veunes, sheept, shawabawa3, saagarjha, jillesvangurp, emregucerr, winstonp, dabrez, montroser, eric_khun.