Amazon Web Services CEO Matt Garman has delivered a pointed technical rebuttal to Elon Musk's vision of space-based data centers, highlighting fundamental engineering constraints that could keep computation firmly planted on Earth for the foreseeable future.
Speaking at the Cisco AI Summit in San Francisco, Garman identified mass and launch costs as the primary obstacles. "I don't know if you've seen a rack of servers lately: They're heavy," he noted, emphasizing that launch costs alone make orbital data centers economically prohibitive today.
The arithmetic supports his skepticism. A modern server rack weighs 1,000 to 2,000 kilograms fully loaded. Even on SpaceX's Falcon Heavy, the most powerful rocket in regular commercial service, launch costs run roughly $1,500 per kilogram to low-Earth orbit. That works out to $1.5 to 3 million just to launch a single rack, before accounting for power systems, cooling infrastructure, radiation shielding, or the orbital structure that would house it all.
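The per-rack figure follows directly from the numbers above. A quick sketch, using the article's cited mass range and cost-per-kilogram as inputs (these are estimates, not quoted prices from any launch provider):

```python
# Back-of-envelope launch cost for a single server rack to low-Earth orbit.
# Inputs are the article's figures: rack mass 1,000-2,000 kg,
# roughly $1,500/kg to LEO on Falcon Heavy.
COST_PER_KG_USD = 1_500

def launch_cost_usd(rack_mass_kg: float) -> float:
    """Launch cost for the rack's dry mass only: no power, cooling,
    shielding, or structure."""
    return rack_mass_kg * COST_PER_KG_USD

for mass in (1_000, 2_000):
    print(f"{mass} kg rack: ${launch_cost_usd(mass) / 1e6:.1f}M")
```

This is the floor, not the total: radiators, solar arrays, and shielding would multiply the mass actually launched per rack of useful compute.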
Ambitious engineering has beaten long odds before, and launch costs are falling. But space-based data centers face a constellation of challenges that go well beyond the cost of getting hardware to orbit.
Cooling presents perhaps the most fundamental problem. On Earth, data centers use ambient air or water cooling to dissipate the enormous heat generated by thousands of processors. In the vacuum of space, convection doesn't work. Heat can only be rejected through thermal radiation, requiring massive radiator arrays—which themselves add weight and complexity.
Power generation compounds the difficulty. Modern AI data centers consume hundreds of megawatts. The International Space Station generates just 120 kilowatts from its enormous solar arrays, and that power supports a crew of seven plus life support systems. Generating data center-scale power in orbit would require solar arrays covering acres, or nuclear reactors that introduce their own regulatory and engineering challenges.
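The solar-array scale can be sketched the same way. The assumptions below are illustrative, not from the article: the solar constant of about 1,361 W/m² above the atmosphere, 20% cell efficiency, and roughly 60% average illumination, since a low-Earth orbit spends a large fraction of each revolution in Earth's shadow:

```python
# Rough solar array area to supply a given electrical load in LEO.
# Assumed values (illustrative): 1,361 W/m^2 solar constant, 20% cell
# efficiency, 60% average illumination over an orbit; degradation,
# pointing losses, and battery round-trip losses are ignored.
SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
EFFICIENCY = 0.20         # photovoltaic conversion efficiency
DUTY = 0.60               # average fraction of the orbit in sunlight

def array_area_m2(load_watts: float) -> float:
    """Array area needed to average load_watts of electrical output."""
    return load_watts / (SOLAR_CONSTANT * EFFICIENCY * DUTY)

# ISS-scale power vs. a 100 MW data center:
for load in (120e3, 100e6):
    print(f"{load / 1e6:g} MW -> {array_area_m2(load):,.0f} m^2")
```

Even this simplified model puts a 100 MW facility at hundreds of thousands of square meters of panels, roughly a thousand times the collecting area that powers the ISS.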
Latency creates another constraint. For batch workloads, a few extra milliseconds of speed-of-light delay to an orbital facility would hardly matter. But for tightly synchronized distributed AI training and interactive workloads, realistic round-trip times of 10 to 50 milliseconds to low-Earth orbit introduce noticeable lag. Higher orbits, which offer more stable thermal environments and negligible atmospheric drag, push latency far higher still.
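The floor on that lag is set by physics. A sketch of the minimum round-trip light-time to a few representative altitudes, counting only the straight-line distance (real links add routing, queuing, and processing delay on top):

```python
# Minimum round-trip light-time to a satellite at a given altitude,
# assuming a straight vertical path; real network latency is higher.
C_KM_S = 299_792.458   # speed of light in vacuum, km/s

def round_trip_ms(altitude_km: float) -> float:
    """Round-trip light-time in milliseconds for a given altitude."""
    return 2 * altitude_km / C_KM_S * 1000

for name, alt_km in (("LEO, 550 km", 550),
                     ("MEO, 8,000 km", 8_000),
                     ("GEO, 35,786 km", 35_786)):
    print(f"{name}: {round_trip_ms(alt_km):.1f} ms minimum round trip")
```

The jump is steep: a geostationary round trip costs roughly 240 milliseconds before any networking overhead, which is why higher, thermally calmer orbits trade away interactivity.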


