Google's Nano Banana 2 image generator is returning HTTP 200 success codes while delivering no images at all, and charging developers full price for the computational work anyway. The culprit is a hard-coded safety filter called Layer 2 IMAGE_SAFETY that cannot be disabled, even when developers explicitly request no content restrictions. At scale, this silent failure mode can cost thousands of dollars a month in wasted API charges.

## Why Is Google Charging You for Images That Don't Exist?

The confusion starts with an unusual design decision by Google. When Nano Banana 2 (gemini-3.1-flash-image-preview) blocks an image request on safety grounds, it does not return a 4xx error code that would trigger standard error handling. Instead, it returns HTTP 200, the universal "success" code, while quietly including a finishReason field in the response body set to "IMAGE_SAFETY". Your code sees the 200 and assumes everything worked. You check your image variable; it's empty. And you've already been charged.

The billing trap is where this becomes financially painful. Unlike rate-limit errors (429) or server errors (500 and 503), which Google does not bill, a 200 OK response with IMAGE_SAFETY means Google processed your entire prompt through its safety pipeline, determined that it violated policy, blocked the output, and charged you for the computational work. At Nano Banana 2's pricing of approximately $0.067 per image at 1,000-pixel resolution and $0.151 per image at 4,000-pixel resolution, even a modest filter rate becomes expensive at scale.

Consider a production application generating 10,000 images per day at 1,000-pixel resolution. If 20 percent of those requests trigger IMAGE_SAFETY, you are paying approximately $134 per day, or roughly $4,000 per month, for images you never received. At 4,000-pixel resolution with the same 20 percent failure rate, the waste climbs to approximately $302 per day, or over $9,000 per month.

## How Does Google's Two-Layer Safety System Actually Work?
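Concretely, here is what the failure mode described above looks like to client code. This is a minimal sketch: the response body shape (candidates, finishReason, inlineData) follows Gemini's public REST API conventions, but treat the exact field names as assumptions rather than a guaranteed schema.

```python
# Sketch of why status-code checks alone miss silent blocks: the response
# body shape below is assumed from Gemini's REST API conventions.

def extract_image(status_code: int, body: dict):
    """Return (image_data, block_reason). A 200 alone does not mean success."""
    if status_code != 200:
        raise RuntimeError(f"HTTP error: {status_code}")
    candidate = body.get("candidates", [{}])[0]
    reason = candidate.get("finishReason")
    if reason != "STOP":
        # Blocked: no image in the body, but the request was still billed.
        return None, reason
    for part in candidate.get("content", {}).get("parts", []):
        if "inlineData" in part:
            return part["inlineData"]["data"], None
    return None, "NO_IMAGE"

# A Layer 2 block: HTTP 200, empty parts, finishReason set in the body.
blocked = {"candidates": [{"finishReason": "IMAGE_SAFETY", "content": {"parts": []}}]}
image, reason = extract_image(200, blocked)
assert image is None and reason == "IMAGE_SAFETY"
```

Code that only inspects the status code takes the happy path here and silently stores a missing image; only reading finishReason exposes the block.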
Nano Banana 2 uses a dual-layer safety architecture, and understanding the difference between the layers is critical for fixing this problem.

Layer 1 is configurable and responds to safety settings in your API request. You can adjust thresholds for four harm categories: HARASSMENT, HATE_SPEECH, SEXUALLY_EXPLICIT, and DANGEROUS_CONTENT. Setting any category to BLOCK_NONE effectively disables blocking for that category at Layer 1. When Layer 1 blocks a request, the response includes finishReason: "SAFETY".

Layer 2 is where the real problem lives. This layer contains hard-coded safety filters that Google maintains as non-negotiable policy enforcement. The four Layer 2 filters, IMAGE_SAFETY, PROHIBITED_CONTENT, CSAM (child sexual abuse material), and SPII (sensitive personally identifiable information), operate as binary blockers with no configurable threshold. They cannot be disabled through any API parameter, including BLOCK_NONE. When Layer 2 intercepts your request, the response carries finishReason: "IMAGE_SAFETY" or finishReason: "PROHIBITED_CONTENT". The critical detail that most documentation buries is that these Layer 2 responses still return HTTP 200, creating the illusion of success for any code that checks only the status code.

The practical implication is significant: if you have set BLOCK_NONE for all four Layer 1 categories and still get no image, you have not misconfigured anything. Your prompt triggered a Layer 2 filter that no configuration change can bypass. The only path forward is modifying the prompt.

## What Prompts Trigger the Invisible Safety Filter?

Since Nano Banana 2 launched on February 27, 2026, Google has significantly tightened the Layer 2 filters compared to the original Nano Banana model.
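Before looking at what trips Layer 2, it helps to pin down what a maximally permissive Layer 1 configuration actually looks like. The sketch below uses the category and threshold names from Gemini's public safety-settings API; the prompt string is an illustrative placeholder. Note that nothing in this payload affects Layer 2.

```python
# Layer 1 safety settings: all four configurable harm categories set to
# BLOCK_NONE. Layer 2 filters (IMAGE_SAFETY, PROHIBITED_CONTENT, CSAM, SPII)
# ignore these settings entirely.
LAYER1_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

safety_settings = [
    {"category": c, "threshold": "BLOCK_NONE"} for c in LAYER1_CATEGORIES
]

request_body = {
    "contents": [{"parts": [{"text": "a swimmer at the beach"}]}],
    "safetySettings": safety_settings,  # disables Layer 1 blocking only
}
```

With this payload, any remaining block in the response necessarily came from Layer 2, which is exactly the diagnostic the next section relies on.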
The most common triggers fall into six distinct categories, and understanding them is essential for building reliable applications:

- Celebrity or Real-Person Faces: Even indirect references to public figures through descriptive phrasing often trigger the filter, making it difficult to generate images of recognizable people.
- Suggestive or Revealing Clothing: Descriptions are caught even when the intent is clearly non-sexual, such as "a model at a fashion show" or "a swimmer at the beach."
- Realistic Violence or Weapon Depictions: Interpreted broadly, catching military-history illustrations and action-movie scene recreations.
- Real Currency or Financial Documents: Trigger the filter consistently, even for clearly fictional or stylized versions.
- Branded Content or Logo Recreation: Catches any prompt that names a specific brand or closely describes trademarked visual elements.
- Anatomical or Medical Imagery: Blocked when requested in a photorealistic style, though the same content often passes when framed as an educational diagram.

Prompts that generated images successfully with the original model frequently trigger IMAGE_SAFETY on Nano Banana 2 without any change to the prompt text. Community testing suggests that roughly 15 to 25 percent of prompts that worked on the original model now fail, which is why many developers describe the model as "nerfed" in forum posts.

## How to Detect and Prevent Silent Billing Failures

The fix is not a settings change; it is prompt engineering combined with proper response parsing. Here are the concrete steps to implement:

- Parse the finishReason Field: Check the finishReason value in every API response. A value of "SAFETY" means Layer 1 blocked the request (fixable through safetySettings). A value of "IMAGE_SAFETY" means Layer 2 caught it (you must rephrase your prompt).
A value of "PROHIBITED_CONTENT" means the prompt violated core content policies and you should change the subject entirely. A value of "STOP" means generation completed successfully and image data should be present.
- Implement Retry Logic with Prompt Variations: When you detect an IMAGE_SAFETY block, automatically retry with a rephrased prompt that avoids the trigger categories documented above, so a single blocked call does not turn into a string of them.
- Rephrase Prompts to Avoid Trigger Categories: Remove references to celebrities, avoid suggestive clothing descriptions, use stylized rather than photorealistic framing for sensitive content, and describe branded items generically instead of by trademark name.

The key insight is that standard HTTP error handling will never catch the 200-OK-no-image problem; you need to parse the response body to discover that your request was silently rejected. Without this check in place, you could be hemorrhaging money without realizing it.

## Is Nano Banana 2 Still Worth Using Despite These Issues?

Despite the safety filter complications, Nano Banana 2 remains an attractive option for image generation at scale. In real-world testing as of March 2026, the model generates images in 4 to 15 seconds at 1,000-pixel resolution and 10 to 56 seconds at 4,000-pixel resolution. At $0.045 to $0.151 per image depending on resolution, it ranks number one on AI Arena for text-to-image generation while costing roughly half what Nano Banana Pro charges.

The quality story is straightforward and largely positive. At 1,000-pixel resolution, Nano Banana 2 produces images that are genuinely difficult to distinguish from Pro output in blind comparisons. Skin textures in portraits maintain natural detail, architectural scenes show clean lines and accurate perspective, and color reproduction stays vivid. The model achieves a CLIPScore of 0.319 ± 0.006, confirming strong prompt adherence.
Text rendering accuracy lands between 87 and 96 percent depending on complexity, which trails Pro's consistent 94 to 96 percent but significantly outperforms competing models.

Where quality drops become noticeable is in fine detail at 4,000-pixel resolution. While Nano Banana 2 does generate true 4,000-pixel output, close inspection reveals that the finest details sometimes show slight softness compared to Pro's 4,000-pixel output. This is most visible in text-heavy images, where small font sizes may show minor artifacts, and in photorealistic scenes, where hair strands and fabric textures don't quite reach the crispness Pro delivers. For web-resolution use cases and social media, the difference is invisible. For large-format print, where viewers examine images at close range, Pro's quality advantage justifies its higher price.

The most interesting finding from testing concerned consistency rather than peak quality. Nano Banana 2's output variability was slightly higher than Pro's: regenerating the same prompt multiple times produced a wider range of quality outcomes. About 1 in 10 generations at 4,000-pixel resolution exhibited noticeable quality degradation, typically soft backgrounds or slightly muddy textures. Pro showed the same behavior in roughly 1 in 20 generations.

For developers building production applications, the key takeaway is this: Nano Banana 2 offers exceptional value and speed, but you must implement proper response parsing and retry logic to avoid the silent billing trap. The cost savings from choosing Nano Banana 2 over Pro can easily be erased by undetected IMAGE_SAFETY blocks. With the right safeguards in place, however, Nano Banana 2 becomes a genuinely cost-effective choice for high-volume image generation workflows.
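The safeguards described above, parsing finishReason, retrying with rephrased prompt variants, and tracking wasted spend, can be sketched as a small loop. The `generate` callable, the response shape, and the per-image price are assumptions standing in for your actual API client and pricing:

```python
# Sketch of the detection-and-retry loop. `generate` stands in for a real API
# call and is assumed to return (status_code, parsed_body_dict).
PRICE_1K = 0.067  # assumed per-image price at 1,000-pixel resolution

def finish_reason(body: dict) -> str:
    return body.get("candidates", [{}])[0].get("finishReason", "UNKNOWN")

def generate_with_fallbacks(generate, prompts):
    """Try each prompt variant in turn; track spend on silently blocked calls."""
    wasted = 0.0
    for prompt in prompts:
        status, body = generate(prompt)
        if status != 200:
            continue  # real HTTP errors (429/5xx) are not billed
        if finish_reason(body) == "STOP":
            return body, wasted
        # "SAFETY" (Layer 1) or "IMAGE_SAFETY"/"PROHIBITED_CONTENT" (Layer 2):
        # billed but no image. Move on to the next rephrased variant.
        wasted += PRICE_1K
    return None, wasted

# The article's waste arithmetic, for reference:
daily_waste = 10_000 * 0.20 * PRICE_1K  # about $134/day at a 20% block rate
```

In production you would log the finishReason and the accumulated waste per day rather than returning it, which is what surfaces a creeping block rate before it turns into a four-figure monthly bill.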