There is a private school in Austin, Texas, charging up to $65,000 a year in tuition and marketing itself as the future of education. Alpha School has replaced most traditional instruction with an AI-powered curriculum: an automated system that generates personalized lessons for each student. In theory, it sounds like the kind of genuine innovation worth covering. In practice, internal documents obtained by 404 Media tell a different story.
The school's own records describe its AI as sometimes generating lessons that do "more harm than good." Former employees say students are effectively guinea pigs for an unproven system. The developers, it turns out, built the AI partly by scraping content from other online courses without authorization.
Alpha School has received glowing coverage from major media outlets and favorable attention from the Trump administration, which has treated it as a model for education innovation. That framing deserves scrutiny.
I want to be precise about what's wrong here, because "AI in education" is not the problem. AI tools can genuinely help personalize learning, identify knowledge gaps, reduce teacher administrative burden, and make high-quality content more accessible. The technology itself is interesting and worth taking seriously.
The problem is deploying experimental AI systems on children without adequate testing, without transparent disclosure to families about the system's limitations, and with a business model that treats tuition-paying families as the R&D budget. A company acknowledging in internal documents that its product sometimes causes harm — and continuing to sell that product as a premium educational experience — is not innovation. It's negligence with a good marketing budget.
The $65,000 price tag is also significant context. This is not a school trying to democratize education or reach underserved communities. It's selling wealthy families on a vision of technological superiority. The fact that the product doesn't work as advertised is not a minor quality control issue; it's fraud-adjacent territory that warrants regulatory attention.
There's a broader pattern worth naming: the rush to attach "AI-powered" to products in regulated, high-stakes domains — healthcare, education, legal services — without the evidence base those domains require. The tech industry's impatience with established ethical frameworks doesn't make them obsolete. It makes the failures more dangerous. The technology is impressive. The question is whether anyone needs it deployed on children before it's actually ready.