An 18-month investigation by The New Yorker has exposed a troubling pattern of contradictions in Sam Altman's approach to AI regulation. While publicly advocating for stringent oversight of artificial intelligence, OpenAI's CEO was simultaneously lobbying behind closed doors against those same regulations.

The investigation reveals that Altman pursued billions in funding from Gulf autocracies while promoting responsible AI governance to the public and policymakers. Perhaps most concerning, the report details how he attempted to conceal a post-firing investigation that produced no written report - raising questions about transparency at the world's most influential AI company.

I've seen this playbook before in fintech. When someone works both sides of a regulatory debate, it's not just about policy differences - it's about control. The face of "responsible AI" telling Congress one thing while lobbying for the opposite creates a credibility crisis that extends far beyond OpenAI.

The timing is critical. As governments worldwide race to regulate AI before it outpaces them, the industry's most prominent voice for caution appears to have been hedging his bets. The question isn't just whether Sam Altman can be trusted to shape our AI future - it's whether anyone leading a company with commercial interests in AI should be the primary voice shaping its regulation.

The technology is impressive. The governance theater? That's concerning. When the gap between public statements and private actions is this wide, we need to ask who's actually steering this technology - and where they're steering it.
