Meta just signed one of the largest semiconductor deals in tech history: up to $100 billion with AMD to power what Mark Zuckerberg calls "personal superintelligence." The deal signals Meta's serious commitment to competing with OpenAI and Google in the AI race. It also raises an important question: is anyone actually asking for this?
Under the agreement, AMD will supply AI accelerator chips for Meta's data centers over the next several years. These aren't general-purpose processors; they're specialized silicon designed to train and run massive AI models at scale. The investment dwarfs most countries' annual R&D budgets.
So what is "personal superintelligence"? According to Zuckerberg's announcement, it's an AI assistant that knows you better than you know yourself. It anticipates your needs, manages your digital life, mediates your relationships, and helps you make decisions. Think Alexa, but an Alexa that had read your entire message history, knew your biometrics, and could predict what you wanted before you did.
The technology is genuinely impressive. Meta's Llama models already compete with GPT-4 on many benchmarks. With $100 billion in dedicated AI infrastructure, they could leapfrog the competition on training speed and model scale. AMD is betting its future on becoming the alternative to NVIDIA's GPU dominance.
But here's what I keep coming back to: who asked for this?
Meta is investing more in "personal superintelligence" than the US spends annually on cancer research. They're building infrastructure to power AI that will know more about you than your therapist, your doctor, and your best friend combined. And the use case is... helping you draft better posts?
