Nano Banana Pro

blog.google

768 points by meetpateltech 9 hours ago


ceroxylon - 6 hours ago

Google has been stomping around like Godzilla this week, and this is the first time I decided to link my card to their AI studio.

I had seen people saying that they gave up and went to another platform because it was "impossible to pay". I thought this was strange, but after trying to get a working API key for the past half hour, I see what they mean.

Everything is set up, and I see a message that says "You're using Paid API key [NanoBanano] as part of [NanoBanano]. All requests sent in this session will be charged." Then I go to the prompt and get a "permission denied" error.

There is no point in having impressive models if you make it a chore for me to -give you my money-.

vunderba - 4 hours ago

Alright, results are in! I've re-run all my editing-based adherence prompts through Nano Banana Pro. NB Pro managed to successfully pass SHRDLU, the M&M Van Halen test (as verified independently by Simon), and the Scorpio street test - all of which the original NB failed.

  Model results
  1. Nano Banana Pro: 10 / 12
  2. Seedream4: 9 / 12
  3. Nano Banana: 7 / 12
  4. Qwen Image Edit: 6 / 12

https://genai-showdown.specr.net/image-editing

If you just want to see how NB and NB Pro compare against each other:

https://genai-showdown.specr.net/image-editing?models=nb,nbp

minimaxir - 8 hours ago

I... worked on a detailed Nano Banana prompt engineering analysis for months (https://news.ycombinator.com/item?id=45917875)... and Google just released a new version.

Nano Banana Pro should work with my gemimg package (https://github.com/minimaxir/gemimg) without pushing a new version by passing:

    g = GemImg(model="gemini-3-pro-image-preview")
I'll add the new output resolutions and other features ASAP. However, looking at the pricing (https://ai.google.dev/gemini-api/docs/pricing#standard_1), I'm definitely not changing the default model to Pro, as $0.13 per 1K/2K output image makes it a tougher sell.
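For a sense of how that price adds up, here's a back-of-envelope calculator using the $0.13-per-image figure quoted above. The function name and the batch sizes are illustrative, not part of gemimg; check the pricing page for current rates and any token-based billing details.

```python
def batch_cost(num_images: int, price_per_image: float = 0.13) -> float:
    """Cost of generating a batch at the quoted $0.13 per 1K/2K Pro output image."""
    return num_images * price_per_image

# A 1,000-image run at Pro pricing:
print(f"${batch_cost(1000):,.2f}")  # → $130.00
```

At that rate, even a modest experimentation loop of a few hundred generations lands in the tens of dollars, which is why the default model choice matters.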

EDIT: Something interesting in the docs: https://ai.google.dev/gemini-api/docs/image-generation#think...

> The model generates up to two interim images to test composition and logic. The last image within Thinking is also the final rendered image.

Maybe that's partly why the cost is higher, though it's hard to tell whether the interim images are billed in addition to the final output. It could also cause an issue in the base gemimg: depending on how the response is constructed, it might return an interim image instead of the final one, so I'll need to double-check.
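One defensive fix would be to always take the *last* image part of the response, since per the docs the final rendered image is the last image within Thinking. The sketch below assumes a simplified response shape loosely mirroring the google-genai SDK (parts optionally carrying inline image data); the dataclasses here are stand-ins, not the SDK's real types.

```python
# Sketch: select the final rendered image from a multi-part model response,
# skipping any interim "thinking" images that precede it. The Part/InlineData
# structures are hypothetical stand-ins for the SDK's response objects.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class InlineData:
    mime_type: str
    data: bytes


@dataclass
class Part:
    text: Optional[str] = None
    inline_data: Optional[InlineData] = None


def final_image(parts: List[Part]) -> Optional[bytes]:
    """Return the bytes of the last image part, or None if no image is present."""
    images = [p.inline_data.data for p in parts if p.inline_data is not None]
    return images[-1] if images else None


# Two interim images followed by the final render:
parts = [
    Part(inline_data=InlineData("image/png", b"interim-1")),
    Part(inline_data=InlineData("image/png", b"interim-2")),
    Part(text="Done rendering."),
    Part(inline_data=InlineData("image/png", b"final")),
]
assert final_image(parts) == b"final"
```

Indexing from the end rather than the start is the safer default here: if the model emits zero interim images, the last image is still the final one.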