Google appears to be gearing up for its next leap in AI image generation with the upcoming release of Nano Banana 2 — the follow-up to its viral Nano Banana model. Leaked previews and early developer references reveal a smarter, self-correcting image system that could make Google’s Gemini ecosystem a serious creative powerhouse.
According to screenshots and code snippets shared on X (formerly Twitter), Nano Banana 2 will be fully integrated into the Gemini app, continuing Google’s expansion of its AI image tools across platforms.
Early testers report that Nano Banana 2 can handle angle and viewpoint control with greater precision, producing cleaner compositions and sharper geometry than its predecessor. Color accuracy also appears improved, with more natural gradients and lighting transitions. Testers also say embedded text and signage can be edited seamlessly, something rival AI systems still struggle with.
Rather than producing a single final image in one pass, the model reportedly generates a draft, critiques it, and refines it in a loop. Insiders say this iterative process mimics the way a human designer sketches, critiques, and improves their work, effectively making the model a "creative partner" rather than a one-shot generator.
Leaked preview images reveal cleaner lines, smoother shadows, and fewer of the signature visual quirks associated with AI imagery. Figures are proportionally balanced, faces look less uncanny, and reflective surfaces appear physically consistent with the lighting context.
Behind its playful name, Nano Banana 2 marks a deeper shift in Google’s approach to AI-driven design. It treats image generation more like a structured design process, one that thinks, checks, and refines rather than improvises.
This proactive self-correction could also reduce the number of “AI hallucinations,” those unwanted artifacts or surreal distortions that often appear in generated visuals. In effect, the model is learning to judge its own output, an emerging hallmark of next-generation reasoning AI systems.
References to “Nano Banana Pro” have begun appearing in internal GitHub commits, hinting at a premium-tier version of the model with expanded capabilities, likely aimed at professional designers or high-resolution rendering tasks.
Internally, the model is referred to as GEMPIX 2, part of Google’s ongoing Gemini 2.5 Flash ecosystem, which already powers fast image synthesis in apps like Whisk Labs and Google AI Studio.
Testers report that GEMPIX 2 has already started appearing in experimental tools and image enhancement pipelines across Google's ecosystem. That pattern suggests the tech giant is preparing for a broad rollout, much as the original Nano Banana appeared suddenly across Search, Photos, and creative services earlier this year.