Students are using AI "humanizers" to make AI-generated text look more human, specifically to evade AI detection software. It's creating an arms race between AI detection tools and AI evasion tools, with students caught in the middle — and the irony is lost on no one.
This is what happens when institutions try to ban technology instead of adapting to it.
Here's the absurd reality: 43 humanizer tools generated 33.9 million combined website visits in October alone. Students aren't just using these occasionally — they're subscribing at $20 to $50 per month for the privilege of making their writing look less algorithmic.
But here's where it gets really dystopian: students are using AI humanizers for two distinct reasons. Some are using them to evade detection after having AI write their papers. Others, and this is the part that should alarm every educator, are running their own original writing through them because they're terrified of being falsely accused.
Brittany Carr, a student at Liberty University, ran her own writing through Grammarly's AI detector repeatedly, modifying it until it registered as human-written. She eventually left the institution due to the accumulated stress. Let that sink in: a student editing her own original work to pass a reverse Turing test administered by software that's barely more accurate than a coin flip.
Erin Ramirez, a professor at Cal State Monterey Bay, captured the absurdity perfectly: "Students now are trying to prove that they're human, even though they might have never touched AI ever. We're just in a spiral that will never end."
The detection software itself is wildly inconsistent. Independent research shows GPTZero is strong at catching AI-generated text but unreliable when judging human writing. Turnitin produces few false positives but misses over 25% of AI-generated content. Ramirez reported her own papers being flagged as 98% AI despite zero AI usage.
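To put those error rates in perspective, a quick back-of-the-envelope calculation helps (the figures here are illustrative assumptions, not vendor numbers): even a "low" false-positive rate of 1% means an instructor running 500 honest submissions through a detector should expect about five false flags per term, since 0.01 × 500 = 5. Multiply that across a department and you get a steady stream of innocent students forced to prove they wrote their own work.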
So we've built a system where the goal isn't learning — it's passing a test designed by algorithms that don't work.
Professors are drowning in this. Morgan Sanchez notes that properly verifying suspected AI use takes uncompensated time, especially for instructors managing hundreds of students. So they fall back on detection software that generates false positives, which leads to students using humanizers, which leads to better detection software, which leads to better humanizers.
It's an arms race, and the casualty is education itself.
Here's what bothers me most: we're asking the wrong question. The question isn't "how do we detect AI-generated writing?" The question is "why are students using AI to write papers in the first place?"
Maybe it's because we've been assigning the same five-paragraph essay format for decades. Maybe it's because we're testing recall rather than understanding. Maybe it's because we've built an educational system optimized for industrial-era skills in a post-industrial world.
AI isn't the problem. AI is exposing the problem. We've been training students to perform tasks that are now trivially easy to automate. And instead of rethinking what education should look like in an AI-enabled world, we're buying software to detect whether students are using the same tools they'll use in every job they ever have.
The problem isn't the students. It's that we're still pretending AI doesn't exist.
