The Los Angeles Times is reporting something that should terrify every college student who thought AI would make school easier: professors are bringing back oral exams.
Not as a supplement to written work. As a replacement for it.
The trend is accelerating across universities as faculty confront a reality they can no longer ignore: they cannot tell which essays were written by students and which were written by ChatGPT. AI detection tools don't work reliably. Plagiarism checkers are useless. Even sophisticated analysis of writing style fails when students are smart enough to edit the AI output.
So they're doing what educators did for literally thousands of years before the written exam was invented: they're asking students to explain their thinking out loud, in real-time, without the ability to outsource to a chatbot.
The results are revealing, and not in a good way for the students.
Professors report a consistent pattern: students turn in perfect essays with sophisticated argumentation and flawless prose, then sit in oral examination unable to articulate basic concepts from those same essays. The disconnect is so obvious that some faculty have stopped even pretending the written work represents student learning.
One professor quoted in the Times described it as "perfect homework, blank stares" - students who can submit A+ written work but can't answer basic follow-up questions about their own arguments.
This is the AI education crisis that everyone saw coming but nobody wanted to deal with. We've spent two years debating whether students should be allowed to use AI, while the reality is they already are, at scale, and there's no practical way to stop them.
Oral exams are one solution, and honestly not a terrible one. They test actual understanding rather than ability to prompt ChatGPT. They force students to develop communication skills that matter in the real world. They're harder to fake.
But they don't scale. A professor with 200 students can grade 200 essays in a weekend. They cannot conduct 200 individual oral exams. So we're going to see class sizes shrink, more teaching assistants doing examination work, or universities completely rethinking how they assess learning.
The Reddit education community is split on this: a thread on the topic has drawn over 220 upvotes and passionate debate in the comments. Some students argue oral exams are more stressful and disadvantage non-native English speakers. Others point out that if you actually learned the material, explaining it out loud shouldn't be that hard.
Here's my take, as someone who both teaches occasionally and has hired dozens of people: oral communication is a more useful skill than essay writing for most careers. If colleges are moving toward assessment methods that test whether students can think on their feet and articulate complex ideas clearly, that's probably better preparation for the real world than churning out five-paragraph essays.
But we should be honest about what's happening: this isn't innovation in pedagogy, it's retreat. We're admitting that a core assessment method used for generations no longer works because the technology has advanced faster than our ability to adapt.
The deeper question is what happens when AI gets good enough to pass oral exams too. Real-time voice AI is already surprisingly capable. Give it another few years of development and we'll have students with earpieces getting fed answers during examinations.
Then what? In-person, device-free testing environments? Faraday cages around exam rooms? We can't assessment-technology our way out of this forever.
At some point, education has to confront the fundamental question: if AI can do the work we're assigning, should we be assigning different work?
The technology is impressive. The question is whether colleges are ready to completely rethink what they're teaching and why.