Universities are taking a direct role in shaping how AI gets governed by training graduate students to think critically about fairness, accountability, and safety before they enter the workforce. Two major initiatives launched in 2026 signal a shift in how higher education institutions are approaching AI regulation, moving beyond abstract discussions to fund concrete research on AI's real-world impacts in education systems and vulnerable communities.

## Why Are Universities Suddenly Focused on AI Justice and Ethics?

The Penn State College of Education launched the AI Justice Fellows initiative in spring 2026, selecting nine doctoral students for the first cohort to conduct research on how artificial intelligence is reshaping teaching, learning, and academic work. This program reflects a growing recognition that educators and scholars need deep expertise in AI governance, not just technical knowledge. The initiative funds graduate research examining the social and educational implications of emerging AI technologies, with fellows receiving financial support to advance their work while participating in a collaborative learning community.

Leah P. Hollis, associate dean for access, equity and inclusion at Penn State, explained the program's philosophy: "Our responsibility is not simply to adopt AI. It is to ask how we integrate it in ways that are responsible, accountable and aligned with our educational mission." This framing positions universities as institutions that should actively shape AI policy rather than passively implement tools developed elsewhere.

Similarly, UNESCO hosted the "AI & I: Shaping a Safer Digital Caribbean" workshop in Kingston, Jamaica on March 9, 2026, bringing together more than 50 participants from government institutions, youth organizations, and other stakeholders to address how AI affects women's safety and equality online.
The workshop focused on technology-facilitated gender-based violence, or TFGBV, which includes harassment, impersonation, and image-based abuse that disproportionately harms women and girls.

## What Research Topics Are Graduate Students Actually Exploring?

The Penn State cohort represents a diverse range of research questions that go beyond typical AI policy discussions. The nine fellows are examining topics that reveal how AI intersects with equity, identity, and power in educational settings:

- Policy Analysis: Ghadir Al Saghir is using AI-assisted methods to examine equity-focused education policies across Pennsylvania's 499 school districts, integrating machine learning with traditional qualitative analysis.
- Indigenous Knowledge and Data Sovereignty: Mekdes Abera is exploring how Ethiopian Orthodox Tewahedo knowledge systems can inform ethical AI development and protect indigenous data rights in graduate education.
- Student Voice in AI Governance: Nicole Espinoza is developing participatory methods for doctoral students to help create AI guidelines for their own programs, centering student agency in policy-making.
- Multilingual Student Experiences: Suyoung Park is examining how AI-generated feedback treats multilingual graduate students differently, investigating whether AI systems amplify or reduce linguistic bias.
- International Student Surveillance: Asis Wayhudi is analyzing how AI-mediated academic work affects international graduate students, exploring the tension between supportive tools and surveillance mechanisms.
- Epistemic Authority: Yi "Eve" Wu is investigating how generative AI affects who gets to sound "scholarly," examining linguistic justice and authority in academic writing.

These projects reveal a critical insight: AI governance is not just a technical or legal problem. It is fundamentally about power, identity, and whose voices get heard in educational systems.
## How to Build AI Governance Capacity in Your Institution

Both Penn State and UNESCO demonstrate practical approaches that other institutions can adapt to strengthen AI governance and ethics training:

- Structured Funding for Graduate Research: Provide financial support and protected time for doctoral students to conduct rigorous research on AI's social and educational implications, creating a pipeline of scholars who understand both AI systems and their real-world impacts.
- Collaborative Learning Communities: Bring together graduate students from different disciplines and programs to share research findings, methodologies, and ethical frameworks, strengthening individual projects through peer learning and cross-disciplinary perspectives.
- Hands-On Capacity Development Workshops: Offer participatory, practical training sessions for government officials, educators, youth leaders, and community stakeholders that combine knowledge-building with critical reflection on how to apply AI ethics principles in their own contexts.
- Leadership Development in AI Ethics: Ensure that faculty leading these initiatives complete certifications and ongoing professional development in AI governance, so they can model rigorous engagement with emerging technologies.
- Focus on Marginalized Communities: Deliberately center research and training on how AI affects vulnerable populations, including women, multilingual learners, international students, and indigenous communities, rather than treating equity as an afterthought.

## What Specific Risks Are These Programs Addressing?

The UNESCO workshop highlighted a critical concern: generative AI can amplify existing inequalities rather than solve them. Technology-facilitated gender-based violence, including deepfakes, non-consensual content, and misinformation, poses real risks to women and girls online.
The Honourable Olivia Grange, Minister of Culture, Gender, Entertainment and Sport of Jamaica, captured this challenge: "Technology reflects the hands that build it and the society that feeds it data. If we are not careful, AI will not just mirror our existing inequalities; it will magnify them."

This observation underscores why training the next generation of scholars and policymakers matters. If AI systems are built without input from people who understand education, gender dynamics, indigenous knowledge systems, and multilingual communication, those systems will embed biases and harms that are difficult to reverse.

Yi "Eve" Wu, a second-year doctoral student in Penn State's College of Education and AI Justice Fellow, explained the urgency: "As AI tools become more widely used in teaching, research and academic work, colleges of education have an important role in helping future educators and scholars understand how to use these technologies responsibly." This perspective reflects a broader shift in how universities are approaching AI governance, moving from passive adoption to active, informed engagement.

The Penn State and UNESCO initiatives suggest that meaningful AI regulation does not happen only in government offices or corporate boardrooms. It happens in classrooms where the next generation of educators, policymakers, and researchers learn to ask hard questions about fairness, accountability, and responsibility before AI systems are deployed at scale.