The headlines this week screamed about new U.S. export controls on AI chips, and semiconductor stocks took a hit. But if you actually read the draft framework instead of just reacting to the Bloomberg alert, the story is more nuanced than "Nvidia screwed."
Here's what's actually happening. The U.S. government released draft rules on March 5 that create a tiered licensing structure for AI chip exports. Under 1,000 GPUs? Basic paperwork. Between 1,000 and 200,000 GPUs? More scrutiny. Above 200,000? You need host-government security commitments and detailed end-use monitoring.
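For readers who think in code, the tier structure reduces to a simple threshold classifier. A minimal sketch of the thresholds as described above (the function name, tier labels, and the boundary handling at exactly 1,000 or 200,000 GPUs are my assumptions, not language from the draft rules):

```python
def licensing_tier(gpu_count: int) -> str:
    """Map an export order's GPU count to the draft framework's tier.

    Thresholds follow the 1,000- and 200,000-GPU cutoffs in the draft;
    exact boundary treatment is an assumption for illustration.
    """
    if gpu_count < 1_000:
        return "basic paperwork"
    elif gpu_count <= 200_000:
        return "heightened scrutiny"
    else:
        return "host-government commitments + end-use monitoring"
```

So a hypothetical 50,000-GPU order lands in the middle tier, and a 500,000-GPU national buildout triggers the heaviest requirements.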
Yes, that adds friction. If you're Jensen Huang trying to ship 50,000 H100s to a data center in Singapore, you're now dealing with regulatory latency that didn't exist six months ago. That's real revenue risk for Nvidia and AMD, especially on international orders from hyperscalers building out in Europe and Asia.
But here's the part most people are missing: the rules explicitly exempt domestic U.S. data center demand. And that's where the vast majority of AI infrastructure spending is happening right now.
Satya Nadella just committed $80 billion in data center capex for 2025-2026. Most of that is U.S.-based. Amazon Web Services, Google Cloud, and Meta are all building massive GPU clusters in Virginia, Iowa, and Oregon. None of that demand gets touched by export controls.
So the honest read is: these controls are a headwind for Nvidia's international growth, but they don't disrupt the core domestic AI buildout that's driving the current capex cycle.
Now, if you want to get technical about portfolio positioning, here's where it gets interesting. Export controls on chip sales don't affect the companies that make the equipment used to make chips. ASML's extreme ultraviolet lithography machines are still going to TSMC to produce the chips Nvidia designs. Applied Materials, Lam Research, and KLA supply process equipment to every foundry on the planet building advanced nodes.
