Jack Conte, CEO of Patreon, has come out swinging against AI companies that train on creator content without compensation. Speaking at SXSW, he called the "fair use" defense legally and ethically hollow when billion-dollar companies are profiting from artists' work.
This is the creator economy versus the AI economy, and it's about to get very messy.
Conte's argument is simple: if fair use truly justified training AI models on copyrighted content, why are the same companies negotiating licensing deals with major publishers? OpenAI and Google have both signed agreements with news organizations, paying millions for access to their archives.
But individual creators—artists, writers, photographers, musicians—get nothing. Their work is scraped from the web, fed into training datasets, and used to build models worth billions. When they complain, they're told it's fair use.
Fair use was never meant to cover this.
The doctrine exists to allow commentary, criticism, education, and parody without permission. It balances copyright protection against free expression. What it was never designed to do is permit commercial entities to copy millions of copyrighted works in order to build competing products.
AI companies argue that training is transformative, that no single work is reproduced, and that the models create new content rather than copying existing work. These are legally plausible arguments, but they're being tested in multiple lawsuits right now.
Conte isn't alone in his frustration. The Authors Guild, SAG-AFTRA, and several visual artists' organizations have filed or backed lawsuits against AI companies. The core claim is straightforward: you can't build a billion-dollar business on our work without paying us.
But if courts side with creators, AI development as we know it might grind to a halt.
Large language models require enormous datasets. If every piece of training data has to be licensed, costs explode and smaller companies are locked out entirely; only the biggest players with the deepest pockets could afford to train models. That might actually be the outcome: regulatory capture disguised as creators' rights.
