Disney's internal trial of OpenAI's Sora video generation tool was apparently a disaster, according to new reporting from 404 Media. While Sora's public demos looked genuinely impressive — realistic video from text prompts, coherent motion, decent physics — Disney's experience suggests the gap between demo and production-ready tool remains vast.
I've seen this exact movie play out in every tech sector. Impressive demo, breathless media coverage, companies rushing to adopt, then the quiet realization that demo ≠ product. The gap between "works in a controlled environment" and "works in production at scale" is where startups die and enterprise pilots become cautionary tales.
What went wrong
According to sources familiar with Disney's trial, Sora struggled with basically everything required for professional video production:
• Consistency: Characters and objects changed appearance between shots. You can't cut a scene together when the protagonist's shirt changes color mid-sequence.
• Control: Directors need frame-level control over composition, lighting, and camera movement. Sora generates video probabilistically — you can steer it, but you can't precisely control it.
• Iteration: Professional workflows involve hundreds of iterative tweaks. Sora treats each generation as independent, making iterative refinement nearly impossible.
• Integration: Film production uses pipelines built over decades — tools for color grading, VFX, editing, audio sync. Sora doesn't integrate with any of it.
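The iteration problem above comes down to determinism. A toy sketch of the difference, in Python — the function names and stand-in "frames" here are entirely hypothetical, not Sora's API; this only illustrates why resampling on every call breaks iterative refinement:

```python
import random

def deterministic_render(params: dict, seed: int = 0) -> list[float]:
    """Traditional pipeline (toy): identical inputs always yield identical
    frames, so a director can tweak one knob and diff the result."""
    rng = random.Random(f"{seed}:{sorted(params.items())}")  # stable seed
    return [round(rng.random(), 3) for _ in range(4)]  # stand-in "frames"

def stochastic_generate(prompt: str) -> list[float]:
    """Generative model (toy): each call draws fresh entropy, so even an
    unchanged prompt produces different output — no stable baseline."""
    rng = random.Random()  # unseeded: new latent noise every call
    return [round(rng.random(), 3) for _ in range(4)]

a = deterministic_render({"shirt": "red"}, seed=42)
b = deterministic_render({"shirt": "red"}, seed=42)
assert a == b  # reproducible: change one parameter, everything else holds

x = stochastic_generate("protagonist in a red shirt")
y = stochastic_generate("protagonist in a red shirt")
# x and y will almost certainly differ — "tweak and regenerate" starts over
```

The point isn't that generative models can't expose seeds; it's that professional iteration assumes the first kind of behavior end to end, and today's text-to-video workflow gives you the second.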
None of this is surprising to anyone who's actually shipped production software. Demos are optimized for looking impressive. Products are optimized for doing one thing reliably, at scale, integrated with existing workflows.
The demo-to-product gap
When OpenAI showed Sora's demos, they cherry-picked the best results from thousands of generations. That's fine for research — it shows the ceiling of what's possible. But it's misleading if people assume the demo represents typical performance.
