Universities across the United States are abandoning traditional written assignments in favor of oral exams and face-to-face assessments, responding to a troubling trend: students submitting flawless AI-generated work they cannot actually explain. As generative AI tools like ChatGPT become increasingly sophisticated, educators are witnessing a crisis in higher education where take-home essays arrive polished and error-free, yet students go blank when asked to defend their ideas in person.

The shift represents a fundamental rethinking of how colleges verify what students actually know. Professors report that when students are asked to explain their work verbally, they often cannot, suggesting they may not have engaged in the critical thinking required to truly understand the material. This gap between perfect written submissions and blank stares under questioning has prompted institutions from Cornell University to the University of Pennsylvania to implement structured oral assessments.

What's Driving Universities to Abandon Written Exams?

The rise of generative AI has exposed a fundamental weakness in traditional assessment methods. Educators are no longer naively wondering whether students will use AI to complete their homework; they are now focused on determining what students are genuinely learning. Chris Schaffer, a biomedical engineering professor at Cornell University, introduced what he calls an "oral defense" last semester, requiring students to speak directly with instructors about their work.

The concern extends beyond academic integrity. Professors worry that students increasingly view the hard work of thinking as optional, outsourcing cognitive effort to AI tools. This trend threatens to undermine the development of critical thinking skills that students need for upper-level coursework and professional careers.

Emily Hammer, an associate professor of Middle Eastern Languages and Cultures at the University of Pennsylvania, explained her reasoning for pairing oral exams with written assignments: "We're doing this because students are actually losing skills, losing cognitive capacity and creativity."

Hammer's perspective reflects a broader institutional shift. The University of Pennsylvania has begun running faculty workshops on oral exams and is experiencing what Bruce Lenthall, executive director of the school's Center for Teaching and Learning, describes as "a massive shift toward in-person assessments," both written and oral.

How Are Universities Implementing Oral Assessment Strategies?

- Socratic-Style Questioning: Cornell's Schaffer requires students to sign up for 20-minute sessions of Socratic-style questioning after submitting written problem sets, with teaching assistants conducting the oral defenses instead of grading papers.
- AI-Powered Chatbot Exams: NYU professor Panos Ipeirotis unveiled an AI-powered oral exam using a voice-cloned chatbot that asks students questions about their work, provides feedback, and adapts based on responses, allowing students to take exams remotely on their own schedule.
- In-Person Office Hours and Presentations: NYU faculty are increasingly requiring office hours, assigning presentations, and cold-calling students in class to verify understanding.
- Final Conversations: Cornell's religious studies department replaced traditional final exams with 30-minute "final conversations" between students and faculty.
- Mock Interviews: Engineering courses are implementing four-minute mock interviews with each student, even in large classes of 180 people.

The diversity of approaches reflects institutions experimenting to find what works best. Some universities are combining old-fashioned methods with cutting-edge technology. Panos Ipeirotis, who designed the AI-powered oral exam at NYU's Stern School of Business, describes his approach as "fighting fire with fire."

"We wanted to check: Do you know what your team did? Were you a free rider? Did you outsource everything to AI?" Ipeirotis said.

Ipeirotis designed the tool with ElevenLabs, a company that develops generative AI voice agents for job interviews. The chatbot greets students with a cloned voice, asks for identification, then drills into details based on each student's answers. If a student struggles, the AI provides clues and feedback. Ipeirotis grades the exams separately, also with AI assistance. Students in his class this semester are redesigning the AI agent to address initial issues, and Ipeirotis plans to use it in all his future classes.
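Ipeirotis's implementation has not been published, but the flow he describes, greet, verify identity, question, hint on a weak answer, and grade from the transcript afterward, can be sketched as a simple loop. The short Python sketch below is an illustration of that structure only, not NYU's system or the ElevenLabs API: ask_student, looks_weak, and the console input/output are hypothetical stand-ins for the voice agent and scoring model a real build would use.

```python
# Minimal sketch of the adaptive oral-exam flow described above.
# Everything here is a hypothetical stand-in: ask_student() plays the
# role of the voice round-trip (text-to-speech out, speech-to-text in)
# that a production system would delegate to a voice-agent service.

def ask_student(prompt: str) -> str:
    """Stand-in for the voice agent; the console simulates the exchange."""
    return input(f"[examiner] {prompt}\n[student] ")

def looks_weak(answer: str) -> bool:
    """Crude struggle heuristic; a real system would score answers with a model."""
    return len(answer.split()) < 10

def run_exam(student_name: str, topics: list[str]) -> list[tuple[str, str]]:
    transcript: list[tuple[str, str]] = []

    def exchange(prompt: str) -> str:
        reply = ask_student(prompt)
        transcript.append((prompt, reply))
        return reply

    # 1. Greet the student and confirm identity before any questions.
    exchange(f"Hello, this is your oral exam. Please confirm you are {student_name}.")

    for topic in topics:
        # 2. Drill into the submitted work, one topic at a time.
        answer = exchange(f"Walk me through how your team handled {topic}.")

        if looks_weak(answer):
            # 3. If the student struggles, offer a clue and ask again.
            answer = exchange(f"A hint: start from what your report says about {topic}.")

        # 4. Adapt: condition the follow-up on what the student just said.
        exchange(f'You said: "{answer[:60]}". Why was that the right call?')

    # 5. Grading happens separately, over the recorded transcript.
    return transcript

if __name__ == "__main__":
    log = run_exam("Andrea", ["the data cleaning", "the model you chose"])
    print(f"Captured {len(log)} exchanges for separate grading.")
```

The step that does the work is the adaptive follow-up: because each question is conditioned on the student's previous answer, the exam cannot be scripted in advance, whatever backend replaces the console stand-in.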
Why Are Oral Exams Making a Comeback?

Oral exams are not new; they are as old as Socrates and remain standard in certain European universities, notably in England's Oxbridge tutorial system, where students meet faculty for weekly discussions. However, they have been largely absent from modern American undergraduate education.

The COVID-19 pandemic sparked initial interest in oral assessments to address concerns about online cheating, but interest has intensified dramatically since ChatGPT's launch in 2022. During the pandemic, engineering professor Huihui Qi launched a three-year study at the University of California, San Diego on how to scale oral exams. Several universities have since invited her to provide faculty workshops or discuss her research, indicating growing institutional interest in making oral assessments practical for large classes.

The appeal is straightforward: students cannot easily fake understanding in real-time conversation. As Schaffer notes, "You won't be able to AI your way through an oral exam." This creates an incentive for students to actually engage with the material rather than relying on generative AI tools.

At NYU, Clay Shirky, vice provost for AI and technology in education, observed that instructors are increasingly saying, "I need to look my students in the eye and ask, 'Do you know this material?'" This sentiment reflects a broader recognition that traditional written assessments no longer reliably measure student learning in an age of sophisticated AI.

What Are the Practical Challenges and Student Reactions?

Despite the promise of oral exams, implementation presents real challenges. Ipeirotis's AI-powered chatbot received mixed feedback from students. Business major Andrea Liu found the chatbot's voice surprisingly human, but the conversation felt choppy, with odd pauses, and the chatbot sometimes asked multiple questions at once, creating confusion. Students also found it jarring to hear a voice without seeing a person. "It felt kind of awkward to be talking to what was pretty much a blank screen," Liu said. Still, she acknowledged the underlying concern: "There is no perfect world where AI exists and kids are not abusing it."

Even with these technical limitations, educators see benefits, including for shy students: oral assessments can provide a more personalized evaluation of understanding than standardized written tests. The shift also reflects a recognition that the problem is not AI itself, but how institutions measure learning in an AI-enabled world.

The long-term impact of widespread AI use on critical thinking remains uncertain, but educators are not waiting for studies to confirm their concerns. Across humanities and STEM disciplines, from computer science to biomedical engineering, professors are implementing oral assessments as a practical solution to verify that students have genuinely engaged with course material.