The Ethics Gap Nobody's Talking About: Why AI in Criminal Justice Remains a Black Box
When an algorithm decides whether you stay in prison or go free, you deserve to understand why. Yet in Catalonia, Spain, a risk assessment system called RisCanvi makes exactly those decisions while keeping its logic hidden from inmates, judges, and even prison staff. The system illustrates a troubling paradox: AI tools designed to reduce bias and increase fairness have instead created new forms of opacity that undermine due process and human accountability.
How Did a Transparent System Become Opaque?
RisCanvi started with good intentions. Introduced in 2009 by the Catalan prison administration, the system was originally built on expert-defined rules and transparent weighting schemes. Prison officials designed it to standardize risk predictions, reduce discretionary errors by individual judges, and enhance transparency in decisions about parole, temporary leave, and program assignments.
Then, in 2019, everything changed. The system shifted from a simple weighted-sum model that anyone could understand to a logistic regression model, a more complex mathematical approach embedded deep within the prison database. Prison administrators justified the change by pointing to internal audits showing better predictive performance. But the trade-off was stark: the new model learned its parameters from historical data rather than having them set by human experts, and those parameters remain inaccessible to most users.
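To see what changed in concrete terms, here is a minimal sketch, using invented factor names, weights, and data rather than RisCanvi's actual protocol items, of the difference between an expert-weighted sum and a logistic regression whose coefficients are fit to historical data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pre-2019 style: experts publish the weights, so the score is readable.
# These factor names and weights are invented for illustration.
EXPERT_WEIGHTS = {
    "prior_offenses": 2.0,
    "age_at_intake": -1.0,
    "program_completion": -1.5,
}

def expert_score(case: dict) -> float:
    """Transparent weighted sum: anyone can trace each factor's effect."""
    return sum(EXPERT_WEIGHTS[factor] * value for factor, value in case.items())

print(expert_score({"prior_offenses": 3, "age_at_intake": 1, "program_completion": 0}))

# Post-2019 style: coefficients are *learned* from historical outcomes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))            # synthetic historical cases
y = (rng.random(1000) < 0.3).astype(int)  # synthetic recorded outcomes
model = LogisticRegression().fit(X, y)

# The fitted coefficients live inside the model object. Unless they are
# published, users see only the final probability, not the weights.
print(model.predict_proba(X[:1])[0, 1])
```

Note that both models are simple by machine-learning standards. The opacity comes from access, not mathematical complexity: the learned coefficients sit inside the deployed system rather than in a published table.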
Today, key details about variables, thresholds, and decision logic are hidden from prison staff, legal professionals, and the inmates themselves. Most inmates don't even know they're being evaluated by an algorithm. This creates what researchers call "institutional opacity," a deep asymmetry where technical developers retain privileged access while everyone else interacts with the system through reduced and often opaque interfaces.
Why Should Anyone Care About a Spanish Prison System?
RisCanvi matters because it's not unique. Criminal justice systems worldwide are deploying algorithmic risk assessment tools to inform decisions about incarceration, conditional release, rehabilitation, and reintegration. These systems promise objectivity and consistency, but they often deliver the opposite: opacity that undermines fairness and due process.
The European Union recognized this danger explicitly. The EU AI Act, which entered into force in August 2024, classifies criminal risk assessment systems as high-risk applications requiring binding safeguards. International standardization bodies like ISO and IEC have developed complementary frameworks, such as ISO/IEC 42001, to help translate legal principles into auditable practices. Yet despite these regulatory efforts, the frameworks remain generic and require real-world adaptation to be effective in complex, partially opaque legacy systems like RisCanvi.
The stakes couldn't be higher. When individual liberty is at stake, algorithmic opacity becomes more than a technical issue; it becomes a structural injustice that undermines autonomy, fairness, and the legitimacy of public decision-making.
Steps to Restore Transparency and Accountability in High-Risk AI Systems
Researchers propose a three-layer governance and explainability framework for high-risk AI systems like RisCanvi. The approach combines technical transparency, human-centered design, and stakeholder-aligned accountability, drawing on the Trustworthy AI paradigm and aligning with the EU AI Act and ISO/IEC 42001 standards. Its three layers:

- Algorithmic Explainability: Deploy techniques like SHAP-based feature attribution to reveal how specific data points influence individual outcomes, making the model's logic auditable and contestable by stakeholders (see the sketch after this list).
- Human-Centered Interface Design: Create role-sensitive explanation interfaces tailored to different users, such as inmates, judges, and prison staff, so each group can understand the system's logic at an appropriate level of detail.
- Stakeholder-Aligned Narrative Communication: Develop ethically grounded communication strategies that explain algorithmic decisions in plain language, ensuring affected individuals can meaningfully understand and question the logic shaping their trajectories.
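As a concrete illustration of the first layer, here is a minimal sketch of SHAP-based feature attribution applied to a logistic regression risk model. Everything below is hypothetical: the feature names, data, and model are invented stand-ins, not RisCanvi's actual variables or implementation.

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical risk factors; stand-ins, not the real protocol items.
feature_names = ["prior_offenses", "age_at_intake",
                 "program_completion", "disciplinary_reports"]
X = rng.normal(size=(500, 4))
true_w = np.array([0.8, -0.5, -0.7, 0.6])
y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For linear models, shap.LinearExplainer computes exact attributions.
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X[:1])  # one individual's assessment

# Each signed value shows how far a factor pushed this person's score
# above or below the baseline, so the result is contestable point by point.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Run over a full caseload instead of a single individual, the same attribution would let an auditor check whether one factor dominates across demographic groups, which is exactly the kind of scrutiny the explainability layer is meant to enable.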
What Does Ethical AI Look Like in Higher Education?
The opacity problem extends beyond criminal justice. In higher education, a different but equally important ethical challenge is emerging around AI-generated content (AIGC). A large-scale study of 1,642 Chinese university students shows how they balance ethical concerns against practical motivations when deciding whether to use AI tools for learning.
The research, published in April 2026, examined six key factors influencing student acceptance of AIGC. These factors fell into two categories: AI ethical cognition, which includes concerns about academic norms, privacy risks, and algorithmic fairness; and AI usage motivation, which encompasses knowledge-based learning, instrumental benefits, and social entertainment.
The findings were surprising. Concerns about academic norms were negatively associated with AIGC acceptance: students who placed a high value on academic integrity were less likely to use AI tools. The other five factors, however, showed significant positive relationships with acceptance. This suggests that students weigh ethical concerns against practical benefits, and that ethical education alone may not be enough to guide responsible AI adoption.
Theoretically, the study broadens how ethical factors are treated in technology acceptance research and helps explain more complex usage scenarios for generative AI. Practically, the findings offer empirical grounding for ethics and privacy education initiatives, bias detection practices, learning community building, interdisciplinary collaborative projects, and the design of teaching activities around AIGC in higher education.
What's the Common Thread Between These Two Stories?
Whether in criminal justice or higher education, the core issue is the same: AI systems are being deployed in high-stakes contexts without adequate transparency, accountability, or stakeholder input. In criminal justice, opacity undermines due process and human dignity. In education, it creates ethical gray zones where students must navigate competing values without clear guidance.
The solution isn't to reject AI in these domains. Rather, it's to demand that AI systems be designed, deployed, and governed with transparency and accountability at their core. This means making algorithmic logic understandable to affected individuals, creating mechanisms for contestation and appeal, and ensuring that human oversight remains meaningful rather than ceremonial.
As AI becomes more embedded in consequential decisions, the question is no longer whether we can build fair algorithms. It's whether we can build systems that are fair, transparent, and accountable to the people whose lives they affect. RisCanvi and the broader landscape of AI in criminal justice suggest we have much work to do.