We have officially entered the era of AI agents that do things. And the clearest signal of that transition is a piece of open-source software called OpenClaw — which went from 9,000 to 145,000+ GitHub stars in weeks, reportedly caused real Mac Mini supply shortages, and just got its creator hired by OpenAI to lead their personal agents division.
For those not following the story: OpenClaw (originally called Clawdbot) is an open-source AI assistant that is categorically different from chatbots. You do not prompt it and read the response. You install it, connect it to your email, calendar, Slack, WhatsApp, and Telegram, and it handles tasks autonomously — without you asking. It has what its creator calls a "heartbeat": it wakes up on its own and checks on things.
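The "heartbeat" idea can be sketched as a simple polling loop. This is a hypothetical illustration, not OpenClaw's actual implementation: the `check_inbox` function and the `heartbeat` scheduler are invented names, standing in for whatever checks a real agent would register.

```python
import time

def check_inbox():
    # Placeholder check: a real agent would poll email, Slack,
    # calendar, etc., and return anything needing attention.
    return ["unread: invoice from vendor"]

def heartbeat(checks, interval_seconds=300, max_beats=None):
    """Wake up on a fixed interval, run every registered check,
    and collect whatever the checks surface."""
    beats = 0
    findings = []
    while max_beats is None or beats < max_beats:
        for check in checks:
            findings.extend(check())
        beats += 1
        time.sleep(interval_seconds)
    return findings
```

The key design point is that the loop runs without a user prompt: the agent initiates work on its own schedule, which is exactly what separates it from a chatbot waiting for input.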
Here is the real-world market signal that caught my attention. The project went viral enough that demand for Mac Minis — the cheap, power-efficient hardware many users run it on — visibly impacted supply. That is not a GitHub trending story. That is an agent system with measurable market effects. Early AI chatbots had enormous usage. None of them caused hardware shortages.
Investor Jason Calacanis claims his company offloaded 20% of tasks to OpenClaw in 20 days and has no plans to hire humans to replace that capacity for at least a year. Take any single anecdote from a VC with appropriate skepticism. But this kind of testimonial, combined with the GitHub star trajectory, suggests real adoption by real users getting real value out of the system.
Peter Steinberger, the creator, was reportedly courted by both Meta and OpenAI. OpenAI won. He is now leading their personal agents division. OpenClaw will remain open-source under a foundation. The talent acquisition tells you what both companies believe is coming.
Here is the part I want everyone to pay close attention to, because it gets buried in the hype: Cisco found third-party OpenClaw skills conducting data exfiltration without users knowing. One of OpenClaw's own maintainers put it plainly: "if you can't use a command line, this project is too dangerous for you."
This is the core tension of AI agents. An agent that can autonomously read your email, manage your calendar, and execute tasks is also an agent that could, if compromised or carelessly designed, read your email and send it somewhere else. The capability that makes the product valuable is precisely what makes it dangerous.
Traditional software has well-understood security perimeters. You review the code, you know what it does, you understand the data flows. An AI agent running third-party plugins is fundamentally different. The attack surface is the agent's general capability to take actions in the world — and every plugin extends that surface.
The security industry has not caught up with this. We do not have mature frameworks for auditing AI agent behavior, for sandboxing agent capabilities, or for detecting when an agent has been manipulated into doing something it should not. Cisco finding data exfiltration in third-party skills is a preview of what a much larger problem looks like at scale.
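One plausible mitigation shape is a capability allowlist: every action a third-party skill attempts must pass through a gate that logs it and denies anything not explicitly granted. The sketch below is purely illustrative; `SkillSandbox` and the capability names are invented, and nothing here reflects a real OpenClaw API.

```python
class CapabilityError(Exception):
    """Raised when a skill attempts an action it was not granted."""
    pass

class SkillSandbox:
    def __init__(self, granted):
        # Capabilities explicitly granted by the user, e.g. {"calendar.read"}
        self.granted = set(granted)
        self.audit_log = []

    def request(self, skill_name, capability):
        """Record every attempted action and deny anything not on the allowlist."""
        allowed = capability in self.granted
        self.audit_log.append((skill_name, capability, allowed))
        if not allowed:
            raise CapabilityError(f"{skill_name} denied: {capability}")
        return True
```

Under this model, a scheduling skill granted only `calendar.read` would be stopped (and audited) the moment it requested something like `network.egress` — the exfiltration pattern Cisco observed is precisely an action the user never knowingly granted.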
Steinberger joining OpenAI means this is no longer an indie open-source project. It is a directional signal from one of the most influential AI companies about where they think value is headed: from models that answer questions to agents that take actions. That shift has enormous implications for productivity, for security, and for what it means for software to "do something."