Ollama 0.4 is released with support for Meta's Llama 3.2 Vision models locally

ollama.com

135 points by BUFU 5 hours ago


Patrick_Devine - 4 hours ago

This was a pretty heavy lift for us to get out, which is why it took a while. In addition to writing new image processing routines, a vision encoder, and doing cross attention, we also ended up re-architecting the way the models get run by the scheduler. We'll have a technical blog post soon about all the stuff that ended up changing.
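
For anyone wanting to try the new vision support, here is a minimal sketch of a request body for Ollama's `/api/generate` endpoint, which accepts base64-encoded images alongside the prompt. The model name `llama3.2-vision` matches the release announcement; the image bytes and file path are placeholders, not from this thread:

```python
import base64
import json

def build_vision_request(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build a JSON body for Ollama's /api/generate endpoint.

    Ollama takes images as base64 strings in an "images" list
    alongside the text prompt.
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }

# Placeholder image bytes; normally: open("photo.png", "rb").read()
payload = build_vision_request(
    "llama3.2-vision",
    "What is in this picture?",
    b"\x89PNG\r\n\x1a\n",
)

# Send with any HTTP client, e.g.:
#   requests.post("http://localhost:11434/api/generate", json=payload)
print(json.dumps(payload)[:80])
```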

o11c - 3 hours ago

Did they fix multiline editing yet? Any interactive input that wraps across 3+ lines seems to become off-by-one when editing (but fine if you only append?), and this will be only more common with long filenames being added. And triple-quote breaks editing entirely.

How does this address the security concern of filenames being detected and read when not wanted?

- 2 hours ago
[deleted]

inasring - 4 hours ago

Can it run the quantized models?

vasilipupkin - 4 hours ago

How likely is it to run on a reasonably new Windows laptop?