How a Penn Professor Completed a Master's Course in 12 Hours Using AI, and What It Means for Universities

A University of Pennsylvania economics professor has completed what would normally take a week of graduate study in just 12 hours, using Anthropic's Claude AI as a personalized tutor and course designer. The experiment reveals a fundamental challenge facing higher education: if students can access comparable learning experiences for $20 per month through AI, what justifies the cost of traditional university programs?

Jesús Fernández-Villaverde, an economics professor at Penn, designed a personalized course on sociologist Erving Goffman using Claude, completed the assigned readings himself, and engaged the AI as an interactive learning partner. The process took roughly 12 hours of total study time, matching what a well-structured master's program would require over a full week.

"The ability to learn at this level, at close to zero marginal cost, is extraordinary," Fernández-Villaverde told Business Insider. He described the experience as "the difference between reading a book alone and reading it alongside a knowledgeable colleague who has infinite patience and no office hours."

What Makes AI Tutoring So Effective at Course Design?

The professor's three-stage learning process highlights where AI excels in personalized education. First, Claude generated a tailored syllabus based on his existing knowledge, including readings, key themes, and connections to other thinkers. Second, Fernández-Villaverde completed the actual reading himself, but Claude had selected and sequenced the materials. Third, he used the AI as an interactive partner, asking for clarifications, connections, and parallels to concepts from economics.

What impressed Fernández-Villaverde most was Claude's ability to curate what to study, in what order, and what to emphasize, given a student's particular background. "Claude did this at a level that I would say exceeds the 90th percentile of real professors, at least for this kind of task," he explained. He noted that curriculum design is "one of the hardest things a professor does," yet the AI handled it with remarkable sophistication.

However, the AI fell short in critical ways. It doesn't challenge students the way great teachers do. "It answers the questions you ask rather than the questions you should be asking," Fernández-Villaverde observed. The AI also cannot replicate the peer experience of a classroom community. Yet he emphasized an important caveat: "One should compare Claude not to the ideal professor but to the real one," acknowledging that human professors also make mistakes and don't always answer student questions precisely or correctly.

How Are Universities Responding to This Challenge?

Fernández-Villaverde sees AI as a major positive for learning but a serious challenge for certain institutions. The implications are primarily economic: AI is forcing universities to justify what students are paying for. University programs built primarily around delivering lectures are especially exposed to disruption.

"If your main value proposition is transmitting existing knowledge in the classroom, and a student can get a comparable or better version of that for $20 a month, the business model is under severe pressure," he stated. However, he acknowledged that this doesn't apply equally across all institutions. Top universities still offer advantages AI cannot easily replicate.

The institutions most vulnerable to disruption share common characteristics, while those that will thrive offer something AI cannot provide: research mentorship, laboratory access, genuine peer communities, and credible credentials. "The ones that are essentially selling access to lectures and a diploma will face the hardest questions," Fernández-Villaverde warned.

Steps Universities Are Taking to Adapt to AI

  • Redesigning Assessments: Top universities like Cornell and the University of Pennsylvania are reintroducing oral exams where students must explain their reasoning face-to-face with instructors, ensuring they actually understand the material rather than relying on AI outputs.
  • Shifting Focus to Research and Mentorship: Universities are emphasizing proximity to the research frontier, hands-on laboratory access, and direct mentorship from faculty as core value propositions that AI cannot replicate.
  • Developing Clear AI Use Policies: Institutions are rapidly developing policies governing which tools are permitted in specific courses, disclosure requirements, and assignment redesign to make AI assistance irrelevant to core assessments.

Beyond higher education, schools at all levels are grappling with similar questions about AI's role. Some districts like New York City Public Schools have reversed earlier bans on ChatGPT and launched AI initiatives, while others are adopting guardrailed tools like Khanmigo that guide students step-by-step rather than providing direct answers. This reflects a broader shift from asking "Should we use AI?" to asking "How do we use AI responsibly?"

The Penn professor's experiment suggests that the future of education won't be about eliminating AI, but rather about institutions proving they offer something beyond knowledge transmission. For universities built on lecture-based models, the pressure to transform is now urgent. For those with strong research programs, engaged faculty mentorship, and vibrant peer communities, AI may actually enhance their value by freeing up time for deeper intellectual engagement.