A new governance framework suggests that complete transparency in AI systems may be neither feasible nor necessary for building trust. Instead of treating opacity as a design flaw to eliminate, researchers propose managing it ethically through role-sensitive explanations tailored to different stakeholders. This shift moves the focus from universal transparency to institutional accountability and context-aware governance.

## Why the Push for Total AI Transparency Might Be Backfiring

For years, the dominant narrative in AI ethics has centered on the "black box" problem. The thinking goes: if we could see exactly how an AI system makes decisions, we could ensure it's fair, safe, and trustworthy. But this assumption has a fundamental flaw. Full transparency can actually mislead non-experts by overwhelming them with uninterpretable data, expose sensitive intellectual property, or create a false sense of fairness while concealing deeper systemic risks.

Researchers Francisco Herrera and Reyes Calderón introduced the LoBOX (Lack of Belief: Opacity and eXplainability) ethics governance framework to address this paradox. Rather than fighting opacity, the framework acknowledges that in complex AI systems powered by deep learning, internal operations may be technically accessible yet remain cognitively opaque even to domain experts. The key insight: opacity is a condition to be ethically governed, not eliminated.

## What Does Role-Sensitive Explainability Actually Mean?

The LoBOX framework integrates what the researchers call the RED/BLUE XAI model, which aligns explanation strategies with stakeholder roles. This approach recognizes that different people need different types of explanations. A clinical professional reviewing an AI diagnostic tool needs explanations that support decision-making and responsibility attribution, while a patient using the same tool prioritizes explanations that foster reassurance, fairness perceptions, and procedural understanding.

Research in human-computer interaction demonstrates this divergence clearly. When researchers studied stakeholder needs within a single high-risk domain, they found that even within the same field, different groups had significantly different expectations. Rather than forcing everyone through a single explanation strategy, the solution is the deliberate design of stakeholder-tailored interfaces.

## How to Implement Ethical AI Governance Under Opacity

The LoBOX framework proposes a three-stage governance pathway that moves beyond the assumption that transparency solves everything:

- Reduce Accidental Opacity: Eliminate unnecessary obscurity in AI systems by improving documentation, clarifying decision pathways, and removing technical barriers that don't serve a purpose.
- Bound Irreducible Opacity: Acknowledge that some complexity cannot be fully explained, and establish clear institutional boundaries around what opacity is acceptable and why.
- Delegate Trust Through Institutional Oversight: Build accountability structures in which institutions take responsibility for AI decisions, rather than expecting individual users to understand every technical detail.
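To make role-sensitive explainability concrete, the sketch below shows one way an explanation layer could route different explanation payloads to different stakeholder roles. This is a minimal Python illustration of the general pattern only, not an implementation of LoBOX or the RED/BLUE XAI model; the role names, the `Explanation` fields, and the `explain` function are all assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Role(Enum):
    """Hypothetical stakeholder roles; real deployments would define their own."""
    CLINICIAN = auto()   # needs decision support and responsibility attribution
    PATIENT = auto()     # needs reassurance and procedural understanding
    AUDITOR = auto()     # needs technical detail to verify safety standards


@dataclass
class Explanation:
    summary: str            # plain-language account of the outcome
    evidence: dict          # role-appropriate supporting detail
    accountable_party: str  # who answers for the decision


def explain(prediction: dict, role: Role) -> Explanation:
    """Return a role-tailored explanation for one model output (illustrative only)."""
    if role is Role.CLINICIAN:
        return Explanation(
            summary=f"Model flags {prediction['finding']} with score {prediction['score']:.2f}.",
            evidence={"top_features": prediction["top_features"]},
            accountable_party="Reviewing physician",
        )
    if role is Role.PATIENT:
        return Explanation(
            summary="An automated tool assisted your care team; a doctor reviewed the result.",
            evidence={"review_process": "All AI-assisted findings are checked by a clinician."},
            accountable_party="Hospital quality-assurance office",
        )
    # AUDITOR: fuller technical record for conformity checks
    return Explanation(
        summary="Full model card, validation metrics, and decision log attached.",
        evidence={"model_version": prediction["model_version"], "audit_log": prediction["audit_log"]},
        accountable_party="Deploying institution",
    )


if __name__ == "__main__":
    sample = {
        "finding": "suspicious lesion",
        "score": 0.87,
        "top_features": ["lesion size", "border irregularity"],
        "model_version": "v2.3",
        "audit_log": "runs/2024-06-01/case-1182",
    }
    for role in Role:
        print(role.name, "->", explain(sample, role).summary)
```

The design point is that the same prediction yields different, role-appropriate justifications, and each one names an accountable party, mirroring the framework's emphasis on institutional responsibility rather than universal transparency.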
This approach aligns with emerging legal instruments like the EU AI Act, which recognizes that different stakeholders have different rights and needs regarding AI explanations. The framework is designed to remain aligned with evolving technological contexts and stakeholder expectations while ethically governing opacity.

## How Does This Change What "Trustworthy AI" Actually Means?

The traditional view treats trust as something that flows from transparency: more light equals more trust. The LoBOX framework instead reframes trust as an outcome of institutional credibility, structured justification, and stakeholder-sensitive accountability. In other words, you trust an AI system not because you can see every line of code, but because you trust the institution deploying it, understand the reasoning behind its decisions, and know who is accountable if something goes wrong.

This distinction matters practically. Consider a hospital using AI to help diagnose cancer. A patient doesn't need to understand the neural network architecture; they need to know that doctors reviewed the recommendation, that the hospital has quality controls in place, and that someone is responsible if the AI makes a mistake. A regulatory auditor, by contrast, needs technical transparency to verify that the system meets safety standards. The same AI system can be trustworthy for both stakeholders through different explanation strategies, not through universal transparency.

The research also highlights a psychological dimension often overlooked in transparency debates: explanations can be behaviorally and psychologically counterproductive, for instance by promoting excessive reliance on model output. Providing more explanation isn't always better; what matters is whether the explanation supports appropriate human judgment or undermines it.

As AI systems become more prevalent in high-stakes decisions, the shift from transparency-centric approaches to governance-centric ones may prove essential. Rather than pursuing an impossible ideal of universal explainability, organizations can build trustworthy AI by tailoring explanations to stakeholder roles, establishing clear institutional accountability, and ethically managing the opacity that is both inevitable and sometimes necessary.