When members of the Parents and Kids Safe AI Coalition discovered that OpenAI was their primary funder, the reaction was swift and angry. "It's a very grimy feeling," one nonprofit leader told the San Francisco Standard.
The coalition, formed to advocate for California's Parents and Kids Safe AI Act, turned out to be funded entirely by OpenAI, to the tune of $10 million. The legislation would mandate age verification systems and additional safeguards for users under 18 across AI platforms.
Here's where it gets interesting: Sam Altman, OpenAI's CEO, also heads a company that provides age verification services. So the world's leading AI company was secretly funding advocacy for regulations that its own products would have to comply with, and that, conveniently, would create demand for the kind of service sold by its CEO's other venture.
This is regulatory capture 101, and it's exactly the kind of thing that makes people distrust tech companies. When you fund advocacy groups without disclosure, send emails that coalition members describe as "pretty misleading," and push for regulations that benefit your own business interests, you're not building trust.
The technology angle here is real—age verification for AI systems is a legitimate challenge worth discussing. Kids probably shouldn't have unfettered access to sophisticated AI that can generate anything on demand. But that conversation needs to happen transparently, not through astroturfed coalitions that hide their funding sources.
Several coalition members quit when they learned the truth, which tells you everything you need to know about how OpenAI handled this. If your advocacy strategy falls apart the moment people learn you're behind it, maybe the problem isn't the disclosure—it's the strategy.
The irony is rich: a company building systems meant to be helpful and truthful was running a stealth advocacy campaign that misled the very child safety organizations it claimed to support. The technology is impressive. The ethics, not so much.